00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1065
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3727
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.221 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.222 The recommended git tool is: git
00:00:00.222 using credential 00000000-0000-0000-0000-000000000002
00:00:00.224 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.276 Fetching changes from the remote Git repository
00:00:00.278 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.311 Using shallow fetch with depth 1
00:00:00.311 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.311 > git --version # timeout=10
00:00:00.347 > git --version # 'git version 2.39.2'
00:00:00.347 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.372 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.372 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.126 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.137 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.149 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.149 > git config core.sparsecheckout # timeout=10
00:00:07.160 > git read-tree -mu HEAD # timeout=10
00:00:07.176 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.198 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.199 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.277 [Pipeline] Start of Pipeline
00:00:07.289 [Pipeline] library
00:00:07.291 Loading library shm_lib@master
00:00:07.292 Library shm_lib@master is cached. Copying from home.
00:00:07.332 [Pipeline] node
00:00:07.358 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.360 [Pipeline] {
00:00:07.368 [Pipeline] catchError
00:00:07.369 [Pipeline] {
00:00:07.378 [Pipeline] wrap
00:00:07.384 [Pipeline] {
00:00:07.390 [Pipeline] stage
00:00:07.392 [Pipeline] { (Prologue)
00:00:07.591 [Pipeline] sh
00:00:07.878 + logger -p user.info -t JENKINS-CI
00:00:07.895 [Pipeline] echo
00:00:07.897 Node: WFP4
00:00:07.903 [Pipeline] sh
00:00:08.199 [Pipeline] setCustomBuildProperty
00:00:08.209 [Pipeline] echo
00:00:08.210 Cleanup processes
00:00:08.214 [Pipeline] sh
00:00:08.497 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.497 674169 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.508 [Pipeline] sh
00:00:08.791 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.792 ++ grep -v 'sudo pgrep'
00:00:08.792 ++ awk '{print $1}'
00:00:08.792 + sudo kill -9
00:00:08.792 + true
00:00:08.810 [Pipeline] cleanWs
00:00:08.823 [WS-CLEANUP] Deleting project workspace...
00:00:08.823 [WS-CLEANUP] Deferred wipeout is used...
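The cleanup step above chains `pgrep -af`, `grep -v`, and `awk` to build a kill list while excluding the `pgrep` invocation itself. A minimal sketch of that filter; the function name `filter_stale_pids` and the sample data are illustrative, not part of the Jenkins scripts:

```shell
# Illustrative sketch of the kill-list filter traced above.
# Given `pgrep -af PATTERN` output (PID followed by the full command line),
# drop the pgrep invocation itself and keep only the PIDs.
filter_stale_pids() {
  grep -v 'sudo pgrep' | awk '{print $1}'
}

# Hypothetical pgrep output: the pgrep process itself plus one stale target.
sample='674169 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
12345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt'

printf '%s\n' "$sample" | filter_stale_pids   # prints only 12345
```

In the log the resulting list was empty (only the `pgrep` line matched), which is why `sudo kill -9` ran with no arguments and the stage continued via `+ true`.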
00:00:08.830 [WS-CLEANUP] done
00:00:08.835 [Pipeline] setCustomBuildProperty
00:00:08.852 [Pipeline] sh
00:00:09.133 + sudo git config --global --replace-all safe.directory '*'
00:00:09.211 [Pipeline] httpRequest
00:00:09.591 [Pipeline] echo
00:00:09.592 Sorcerer 10.211.164.20 is alive
00:00:09.601 [Pipeline] retry
00:00:09.603 [Pipeline] {
00:00:09.616 [Pipeline] httpRequest
00:00:09.619 HttpMethod: GET
00:00:09.620 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.620 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.639 Response Code: HTTP/1.1 200 OK
00:00:09.639 Success: Status code 200 is in the accepted range: 200,404
00:00:09.640 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.622 [Pipeline] }
00:00:14.639 [Pipeline] // retry
00:00:14.646 [Pipeline] sh
00:00:14.932 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.946 [Pipeline] httpRequest
00:00:15.280 [Pipeline] echo
00:00:15.282 Sorcerer 10.211.164.20 is alive
00:00:15.290 [Pipeline] retry
00:00:15.292 [Pipeline] {
00:00:15.305 [Pipeline] httpRequest
00:00:15.310 HttpMethod: GET
00:00:15.310 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:15.311 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:15.314 Response Code: HTTP/1.1 200 OK
00:00:15.315 Success: Status code 200 is in the accepted range: 200,404
00:00:15.315 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:45.139 [Pipeline] }
00:00:45.158 [Pipeline] // retry
00:00:45.166 [Pipeline] sh
00:00:45.452 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:48.007 [Pipeline] sh
00:00:48.293 + git -C spdk log --oneline -n5
00:00:48.293 e01cb43b8 mk/spdk.common.mk sed the minor version
00:00:48.293 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:48.293 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:48.293 66289a6db build: use VERSION file for storing version
00:00:48.293 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:48.312 [Pipeline] withCredentials
00:00:48.322 > git --version # timeout=10
00:00:48.335 > git --version # 'git version 2.39.2'
00:00:48.351 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:48.353 [Pipeline] {
00:00:48.362 [Pipeline] retry
00:00:48.364 [Pipeline] {
00:00:48.379 [Pipeline] sh
00:00:48.663 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:48.935 [Pipeline] }
00:00:48.953 [Pipeline] // retry
00:00:48.958 [Pipeline] }
00:00:48.975 [Pipeline] // withCredentials
00:00:48.984 [Pipeline] httpRequest
00:00:49.378 [Pipeline] echo
00:00:49.380 Sorcerer 10.211.164.20 is alive
00:00:49.390 [Pipeline] retry
00:00:49.392 [Pipeline] {
00:00:49.406 [Pipeline] httpRequest
00:00:49.411 HttpMethod: GET
00:00:49.411 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:49.412 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:49.429 Response Code: HTTP/1.1 200 OK
00:00:49.429 Success: Status code 200 is in the accepted range: 200,404
00:00:49.430 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:25.316 [Pipeline] }
00:01:25.334 [Pipeline] // retry
00:01:25.342 [Pipeline] sh
00:01:25.632 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:27.020 [Pipeline] sh
00:01:27.305 + git -C dpdk log --oneline -n5
00:01:27.305 eeb0605f11 version: 23.11.0
00:01:27.305 238778122a doc: update release notes for 23.11
00:01:27.305 46aa6b3cfc doc: fix description of RSS features
00:01:27.305 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:27.305 7e421ae345 devtools: support skipping forbid rule check
00:01:27.314 [Pipeline] }
00:01:27.328 [Pipeline] // stage
00:01:27.338 [Pipeline] stage
00:01:27.340 [Pipeline] { (Prepare)
00:01:27.359 [Pipeline] writeFile
00:01:27.374 [Pipeline] sh
00:01:27.659 + logger -p user.info -t JENKINS-CI
00:01:27.670 [Pipeline] sh
00:01:27.952 + logger -p user.info -t JENKINS-CI
00:01:27.963 [Pipeline] sh
00:01:28.247 + cat autorun-spdk.conf
00:01:28.247 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.247 SPDK_TEST_NVMF=1
00:01:28.247 SPDK_TEST_NVME_CLI=1
00:01:28.247 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.247 SPDK_TEST_NVMF_NICS=e810
00:01:28.247 SPDK_TEST_VFIOUSER=1
00:01:28.247 SPDK_RUN_UBSAN=1
00:01:28.247 NET_TYPE=phy
00:01:28.247 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:28.247 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.255 RUN_NIGHTLY=1
00:01:28.258 [Pipeline] readFile
00:01:28.278 [Pipeline] withEnv
00:01:28.279 [Pipeline] {
00:01:28.288 [Pipeline] sh
00:01:28.572 + set -ex
00:01:28.572 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:28.572 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:28.572 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.572 ++ SPDK_TEST_NVMF=1
00:01:28.572 ++ SPDK_TEST_NVME_CLI=1
00:01:28.572 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.572 ++ SPDK_TEST_NVMF_NICS=e810
00:01:28.572 ++ SPDK_TEST_VFIOUSER=1
00:01:28.572 ++ SPDK_RUN_UBSAN=1
00:01:28.572 ++ NET_TYPE=phy
00:01:28.572 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:28.572 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.572 ++ RUN_NIGHTLY=1
00:01:28.572 + case $SPDK_TEST_NVMF_NICS in
00:01:28.572 + DRIVERS=ice
00:01:28.572 + [[ tcp == \r\d\m\a ]]
00:01:28.572 + [[ -n ice ]]
00:01:28.572 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:28.572 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:28.572 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:28.572 rmmod: ERROR: Module i40iw is not currently loaded
00:01:28.572 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:28.572 + true
00:01:28.572 + for D in $DRIVERS
00:01:28.572 + sudo modprobe ice
00:01:28.572 + exit 0
00:01:28.581 [Pipeline] }
00:01:28.592 [Pipeline] // withEnv
00:01:28.597 [Pipeline] }
00:01:28.609 [Pipeline] // stage
00:01:28.617 [Pipeline] catchError
00:01:28.618 [Pipeline] {
00:01:28.631 [Pipeline] timeout
00:01:28.631 Timeout set to expire in 1 hr 0 min
00:01:28.632 [Pipeline] {
00:01:28.644 [Pipeline] stage
00:01:28.646 [Pipeline] { (Tests)
00:01:28.659 [Pipeline] sh
00:01:28.943 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.943 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.943 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.943 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:28.943 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.943 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:28.943 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:28.943 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:28.943 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:28.943 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:28.943 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:28.943 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.943 + source /etc/os-release
00:01:28.943 ++ NAME='Fedora Linux'
00:01:28.943 ++ VERSION='39 (Cloud Edition)'
00:01:28.943 ++ ID=fedora
00:01:28.943 ++ VERSION_ID=39
00:01:28.943 ++ VERSION_CODENAME=
00:01:28.943 ++ PLATFORM_ID=platform:f39
00:01:28.943 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:28.943 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:28.943 ++ LOGO=fedora-logo-icon
00:01:28.943 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:28.943 ++ HOME_URL=https://fedoraproject.org/
00:01:28.943 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:28.943 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:28.943 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:28.943 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:28.943 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:28.943 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:28.943 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:28.943 ++ SUPPORT_END=2024-11-12
00:01:28.943 ++ VARIANT='Cloud Edition'
00:01:28.943 ++ VARIANT_ID=cloud
00:01:28.943 + uname -a
00:01:28.943 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:28.943 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:31.480 Hugepages
00:01:31.480 node hugesize free / total
00:01:31.480 node0 1048576kB 0 / 0
00:01:31.480 node0 2048kB 0 / 0
00:01:31.480 node1 1048576kB 0 / 0
00:01:31.480 node1 2048kB 0 / 0
00:01:31.480
00:01:31.480 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:31.480 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:31.480 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:31.480 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:31.480 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:31.480 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:31.480 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:31.480 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:31.480 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:31.480 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:31.480 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:31.480 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:31.480 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:31.480 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:31.480 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:31.480 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:31.480 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:31.480 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:31.480 + rm -f /tmp/spdk-ld-path 00:01:31.480 + source autorun-spdk.conf 00:01:31.480 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.480 ++ SPDK_TEST_NVMF=1 00:01:31.480 ++ SPDK_TEST_NVME_CLI=1 00:01:31.480 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.480 ++ SPDK_TEST_NVMF_NICS=e810 00:01:31.480 ++ SPDK_TEST_VFIOUSER=1 00:01:31.480 ++ SPDK_RUN_UBSAN=1 00:01:31.480 ++ NET_TYPE=phy 00:01:31.480 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:31.480 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.480 ++ RUN_NIGHTLY=1 00:01:31.480 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:31.480 + [[ -n '' ]] 00:01:31.480 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.480 + for M in /var/spdk/build-*-manifest.txt 00:01:31.480 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:31.480 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.480 + for M in /var/spdk/build-*-manifest.txt 00:01:31.480 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:31.480 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.480 + for M in /var/spdk/build-*-manifest.txt 00:01:31.480 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:31.480 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.480 ++ uname 00:01:31.480 + [[ Linux == \L\i\n\u\x ]] 00:01:31.480 + sudo dmesg -T 00:01:31.480 + sudo dmesg --clear 00:01:31.740 + dmesg_pid=675141 00:01:31.740 + [[ Fedora Linux == FreeBSD ]] 00:01:31.740 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.740 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.740 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:31.740 + [[ -x /usr/src/fio-static/fio ]] 00:01:31.740 + export FIO_BIN=/usr/src/fio-static/fio 00:01:31.740 + FIO_BIN=/usr/src/fio-static/fio 00:01:31.740 + sudo dmesg -Tw 00:01:31.740 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:31.740 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:31.740 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:31.740 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.740 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.740 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:31.740 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.740 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.740 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:31.740 12:41:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:31.740 12:41:39 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 
-- $ SPDK_TEST_NVME_CLI=1 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.740 12:41:39 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:31.740 12:41:39 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:31.740 12:41:39 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:31.740 12:41:39 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:31.740 12:41:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:31.740 12:41:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:31.740 12:41:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.740 12:41:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.740 12:41:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.740 12:41:39 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.740 12:41:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.740 12:41:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.740 12:41:39 -- paths/export.sh@5 -- $ export PATH 00:01:31.740 12:41:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.740 12:41:39 -- 
common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:31.740 12:41:39 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:31.740 12:41:39 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734262899.XXXXXX 00:01:31.740 12:41:39 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734262899.kPxdqR 00:01:31.740 12:41:39 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:31.740 12:41:39 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:01:31.740 12:41:39 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.740 12:41:39 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:31.740 12:41:39 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:31.740 12:41:39 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:31.740 12:41:39 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:31.740 12:41:39 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:31.740 12:41:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.740 12:41:39 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:31.740 12:41:39 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:31.740 12:41:39 -- pm/common@17 -- $ local monitor 00:01:31.740 12:41:39 -- pm/common@19 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.740 12:41:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.740 12:41:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.740 12:41:39 -- pm/common@21 -- $ date +%s 00:01:31.740 12:41:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.740 12:41:39 -- pm/common@21 -- $ date +%s 00:01:31.740 12:41:39 -- pm/common@25 -- $ sleep 1 00:01:31.740 12:41:39 -- pm/common@21 -- $ date +%s 00:01:31.740 12:41:39 -- pm/common@21 -- $ date +%s 00:01:31.740 12:41:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734262899 00:01:31.740 12:41:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734262899 00:01:31.740 12:41:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734262899 00:01:31.740 12:41:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734262899 00:01:31.740 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734262899_collect-cpu-load.pm.log 00:01:31.740 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734262899_collect-vmstat.pm.log 00:01:31.740 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734262899_collect-cpu-temp.pm.log 00:01:32.000 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734262899_collect-bmc-pm.bmc.pm.log 00:01:32.938 12:41:40 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:32.938 12:41:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:32.938 12:41:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:32.938 12:41:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.938 12:41:40 -- spdk/autobuild.sh@16 -- $ date -u 00:01:32.938 Sun Dec 15 11:41:40 AM UTC 2024 00:01:32.938 12:41:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:32.938 v25.01-rc1-2-ge01cb43b8 00:01:32.938 12:41:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:32.938 12:41:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:32.938 12:41:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:32.938 12:41:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:32.938 12:41:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:32.938 12:41:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.938 ************************************ 00:01:32.938 START TEST ubsan 00:01:32.938 ************************************ 00:01:32.938 12:41:40 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:32.938 using ubsan 00:01:32.938 00:01:32.938 real 0m0.000s 00:01:32.938 user 0m0.000s 00:01:32.938 sys 0m0.000s 00:01:32.938 12:41:40 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:32.938 12:41:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:32.938 ************************************ 00:01:32.938 END TEST ubsan 00:01:32.938 ************************************ 00:01:32.938 12:41:40 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:32.938 12:41:40 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:32.938 12:41:40 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:32.938 12:41:40 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:32.938 12:41:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:32.938 12:41:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.938 ************************************ 00:01:32.938 START TEST build_native_dpdk 00:01:32.938 ************************************ 00:01:32.938 12:41:40 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.938 12:41:40 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:32.938 eeb0605f11 version: 23.11.0 00:01:32.938 238778122a doc: update release notes for 23.11 00:01:32.938 46aa6b3cfc doc: fix description of RSS features 00:01:32.938 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:32.938 7e421ae345 devtools: support skipping forbid rule check 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" 
"mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:01:32.938 12:41:40 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:32.939 12:41:40 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:32.939 patching file config/rte_config.h 00:01:32.939 Hunk #1 succeeded at 60 (offset 1 line). 
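The trace above walks through `scripts/common.sh`'s `cmp_versions` field-by-field version comparison (split each version on `.`, `-`, `:`; compare numerically until the fields differ; the operator decides the exit status). The following is a reconstructed sketch inferred from the logged trace, not the actual `scripts/common.sh` source — the real helper also validates each field through a `decimal` function, which this condensed form assumes away:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions helper whose trace appears above. Reconstructed
# from the log; the real function additionally validates fields via `decimal`.
cmp_versions() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"   # e.g. 23.11.0 -> (23 11 0)
    local op=$2
    IFS='.-:' read -ra ver2 <<< "$3"
    local lt=0 gt=0 v
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do           # compare field by field
        if   (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then gt=1; break
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then lt=1; break
        fi
    done
    case "$op" in                               # exit status is the answer
        '<')  (( lt == 1 )) ;;
        '>')  (( gt == 1 )) ;;
        '<=') (( gt == 0 )) ;;
        '>=') (( lt == 0 )) ;;
    esac
}

# Mirrors the trace: 23.11.0 is not older than 21.11.0 (return 1 in the log).
cmp_versions 23.11.0 '<' 21.11.0 && echo "lt" || echo "not-lt"   # prints "not-lt"
```

This is why the build takes the `patch -p1` branches it does: `lt 23.11.0 21.11.0` fails, `lt 23.11.0 24.07.0` succeeds (triggering the `rte_pcapng.c` patch below), and `ge 23.11.0 24.07.0` fails.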
00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:01:32.939 patching file lib/pcapng/rte_pcapng.c 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:32.939 12:41:40 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:32.939 12:41:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@191 -- 
$ '[' Linux = FreeBSD ']' 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:01:32.939 12:41:40 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:01:38.213 The Meson build system 00:01:38.213 Version: 1.5.0 00:01:38.213 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:38.213 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:38.213 Build type: native build 00:01:38.213 Program cat found: YES (/usr/bin/cat) 00:01:38.213 Project name: DPDK 00:01:38.213 Project version: 23.11.0 00:01:38.213 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:38.213 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:38.213 Host machine cpu family: x86_64 00:01:38.213 Host machine cpu: x86_64 00:01:38.213 Message: ## Building in Developer Mode ## 00:01:38.213 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:38.213 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:38.213 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:38.213 Program python3 found: YES (/usr/bin/python3) 00:01:38.213 Program cat found: YES (/usr/bin/cat) 00:01:38.213 config/meson.build:113: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:01:38.213 Compiler for C supports arguments -march=native: YES 00:01:38.213 Checking for size of "void *" : 8 00:01:38.213 Checking for size of "void *" : 8 (cached) 00:01:38.213 Library m found: YES 00:01:38.213 Library numa found: YES 00:01:38.213 Has header "numaif.h" : YES 00:01:38.213 Library fdt found: NO 00:01:38.213 Library execinfo found: NO 00:01:38.213 Has header "execinfo.h" : YES 00:01:38.213 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:38.213 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:38.213 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:38.213 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:38.213 Run-time dependency openssl found: YES 3.1.1 00:01:38.213 Run-time dependency libpcap found: YES 1.10.4 00:01:38.213 Has header "pcap.h" with dependency libpcap: YES 00:01:38.213 Compiler for C supports arguments -Wcast-qual: YES 00:01:38.213 Compiler for C supports arguments -Wdeprecated: YES 00:01:38.213 Compiler for C supports arguments -Wformat: YES 00:01:38.213 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:38.213 Compiler for C supports arguments -Wformat-security: NO 00:01:38.213 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:38.213 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:38.213 Compiler for C supports arguments -Wnested-externs: YES 00:01:38.213 Compiler for C supports arguments -Wold-style-definition: YES 00:01:38.213 Compiler for C supports arguments -Wpointer-arith: YES 00:01:38.213 Compiler for C supports arguments -Wsign-compare: YES 00:01:38.213 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:38.213 Compiler for C supports arguments -Wundef: YES 00:01:38.213 Compiler for C supports arguments -Wwrite-strings: YES 00:01:38.213 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:38.213 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:38.213 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:38.213 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:38.213 Program objdump found: YES (/usr/bin/objdump) 00:01:38.213 Compiler for C supports arguments -mavx512f: YES 00:01:38.213 Checking if "AVX512 checking" compiles: YES 00:01:38.213 Fetching value of define "__SSE4_2__" : 1 00:01:38.213 Fetching value of define "__AES__" : 1 00:01:38.213 Fetching value of define "__AVX__" : 1 00:01:38.213 Fetching value of define "__AVX2__" : 1 00:01:38.213 Fetching value of define "__AVX512BW__" : 1 00:01:38.213 Fetching value of define "__AVX512CD__" : 1 00:01:38.213 Fetching value of define "__AVX512DQ__" : 1 00:01:38.213 Fetching value of define "__AVX512F__" : 1 00:01:38.213 Fetching value of define "__AVX512VL__" : 1 00:01:38.213 Fetching value of define "__PCLMUL__" : 1 00:01:38.213 Fetching value of define "__RDRND__" : 1 00:01:38.213 Fetching value of define "__RDSEED__" : 1 00:01:38.213 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:38.213 Fetching value of define "__znver1__" : (undefined) 00:01:38.213 Fetching value of define "__znver2__" : (undefined) 00:01:38.213 Fetching value of define "__znver3__" : (undefined) 00:01:38.213 Fetching value of define "__znver4__" : (undefined) 00:01:38.213 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:38.213 Message: lib/log: Defining dependency "log" 00:01:38.213 Message: lib/kvargs: Defining dependency "kvargs" 00:01:38.213 Message: lib/telemetry: Defining dependency "telemetry" 00:01:38.213 Checking for function "getentropy" : NO 00:01:38.213 Message: lib/eal: Defining dependency "eal" 00:01:38.213 Message: lib/ring: Defining dependency "ring" 00:01:38.213 Message: lib/rcu: Defining dependency "rcu" 00:01:38.213 Message: lib/mempool: Defining dependency "mempool" 00:01:38.213 Message: lib/mbuf: Defining dependency "mbuf" 00:01:38.213 Fetching value 
of define "__PCLMUL__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:38.213 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:38.213 Compiler for C supports arguments -mpclmul: YES 00:01:38.213 Compiler for C supports arguments -maes: YES 00:01:38.213 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:38.213 Compiler for C supports arguments -mavx512bw: YES 00:01:38.213 Compiler for C supports arguments -mavx512dq: YES 00:01:38.213 Compiler for C supports arguments -mavx512vl: YES 00:01:38.213 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:38.213 Compiler for C supports arguments -mavx2: YES 00:01:38.213 Compiler for C supports arguments -mavx: YES 00:01:38.213 Message: lib/net: Defining dependency "net" 00:01:38.213 Message: lib/meter: Defining dependency "meter" 00:01:38.213 Message: lib/ethdev: Defining dependency "ethdev" 00:01:38.213 Message: lib/pci: Defining dependency "pci" 00:01:38.213 Message: lib/cmdline: Defining dependency "cmdline" 00:01:38.213 Message: lib/metrics: Defining dependency "metrics" 00:01:38.213 Message: lib/hash: Defining dependency "hash" 00:01:38.213 Message: lib/timer: Defining dependency "timer" 00:01:38.213 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:38.213 Message: lib/acl: Defining dependency "acl" 00:01:38.213 Message: lib/bbdev: Defining dependency "bbdev" 00:01:38.213 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:38.213 Run-time dependency libelf found: YES 0.191 00:01:38.213 Message: lib/bpf: Defining dependency "bpf" 
00:01:38.213 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:38.213 Message: lib/compressdev: Defining dependency "compressdev" 00:01:38.213 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:38.213 Message: lib/distributor: Defining dependency "distributor" 00:01:38.213 Message: lib/dmadev: Defining dependency "dmadev" 00:01:38.213 Message: lib/efd: Defining dependency "efd" 00:01:38.213 Message: lib/eventdev: Defining dependency "eventdev" 00:01:38.213 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:38.213 Message: lib/gpudev: Defining dependency "gpudev" 00:01:38.213 Message: lib/gro: Defining dependency "gro" 00:01:38.213 Message: lib/gso: Defining dependency "gso" 00:01:38.213 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:38.213 Message: lib/jobstats: Defining dependency "jobstats" 00:01:38.213 Message: lib/latencystats: Defining dependency "latencystats" 00:01:38.213 Message: lib/lpm: Defining dependency "lpm" 00:01:38.213 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:38.213 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:38.213 Message: lib/member: Defining dependency "member" 00:01:38.213 Message: lib/pcapng: Defining dependency "pcapng" 00:01:38.213 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:38.213 Message: lib/power: Defining dependency "power" 00:01:38.213 Message: lib/rawdev: Defining dependency "rawdev" 00:01:38.213 Message: lib/regexdev: Defining dependency "regexdev" 00:01:38.213 Message: lib/mldev: Defining dependency "mldev" 00:01:38.213 Message: lib/rib: Defining dependency "rib" 00:01:38.213 Message: lib/reorder: Defining dependency "reorder" 00:01:38.213 Message: lib/sched: Defining dependency "sched" 00:01:38.213 Message: lib/security: Defining dependency "security" 00:01:38.213 Message: lib/stack: 
Defining dependency "stack" 00:01:38.213 Has header "linux/userfaultfd.h" : YES 00:01:38.213 Has header "linux/vduse.h" : YES 00:01:38.213 Message: lib/vhost: Defining dependency "vhost" 00:01:38.213 Message: lib/ipsec: Defining dependency "ipsec" 00:01:38.213 Message: lib/pdcp: Defining dependency "pdcp" 00:01:38.213 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:38.213 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:38.213 Message: lib/fib: Defining dependency "fib" 00:01:38.213 Message: lib/port: Defining dependency "port" 00:01:38.213 Message: lib/pdump: Defining dependency "pdump" 00:01:38.213 Message: lib/table: Defining dependency "table" 00:01:38.213 Message: lib/pipeline: Defining dependency "pipeline" 00:01:38.213 Message: lib/graph: Defining dependency "graph" 00:01:38.213 Message: lib/node: Defining dependency "node" 00:01:38.213 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:39.161 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:39.161 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:39.161 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:39.161 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:39.161 Compiler for C supports arguments -Wno-unused-value: YES 00:01:39.161 Compiler for C supports arguments -Wno-format: YES 00:01:39.161 Compiler for C supports arguments -Wno-format-security: YES 00:01:39.161 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:39.161 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:39.161 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:39.161 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:39.161 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:39.161 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:39.161 Compiler for C supports arguments 
-mavx512f: YES (cached) 00:01:39.161 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:39.161 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:39.161 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:39.161 Has header "sys/epoll.h" : YES 00:01:39.161 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:39.161 Configuring doxy-api-html.conf using configuration 00:01:39.161 Configuring doxy-api-man.conf using configuration 00:01:39.161 Program mandb found: YES (/usr/bin/mandb) 00:01:39.161 Program sphinx-build found: NO 00:01:39.161 Configuring rte_build_config.h using configuration 00:01:39.161 Message: 00:01:39.161 ================= 00:01:39.161 Applications Enabled 00:01:39.161 ================= 00:01:39.161 00:01:39.161 apps: 00:01:39.161 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:39.161 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:39.161 test-pmd, test-regex, test-sad, test-security-perf, 00:01:39.161 00:01:39.161 Message: 00:01:39.161 ================= 00:01:39.161 Libraries Enabled 00:01:39.161 ================= 00:01:39.161 00:01:39.161 libs: 00:01:39.161 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:39.161 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:39.161 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:39.161 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:39.161 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:39.161 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:39.161 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:39.161 00:01:39.161 00:01:39.161 Message: 00:01:39.161 =============== 00:01:39.161 Drivers Enabled 00:01:39.161 =============== 00:01:39.161 00:01:39.161 common: 00:01:39.161 00:01:39.161 bus: 00:01:39.161 pci, vdev, 
00:01:39.161 mempool: 00:01:39.161 ring, 00:01:39.161 dma: 00:01:39.161 00:01:39.161 net: 00:01:39.161 i40e, 00:01:39.161 raw: 00:01:39.161 00:01:39.161 crypto: 00:01:39.161 00:01:39.161 compress: 00:01:39.161 00:01:39.161 regex: 00:01:39.161 00:01:39.161 ml: 00:01:39.161 00:01:39.161 vdpa: 00:01:39.161 00:01:39.161 event: 00:01:39.161 00:01:39.161 baseband: 00:01:39.161 00:01:39.161 gpu: 00:01:39.161 00:01:39.161 00:01:39.161 Message: 00:01:39.161 ================= 00:01:39.161 Content Skipped 00:01:39.161 ================= 00:01:39.161 00:01:39.161 apps: 00:01:39.161 00:01:39.161 libs: 00:01:39.161 00:01:39.161 drivers: 00:01:39.161 common/cpt: not in enabled drivers build config 00:01:39.161 common/dpaax: not in enabled drivers build config 00:01:39.161 common/iavf: not in enabled drivers build config 00:01:39.161 common/idpf: not in enabled drivers build config 00:01:39.161 common/mvep: not in enabled drivers build config 00:01:39.161 common/octeontx: not in enabled drivers build config 00:01:39.161 bus/auxiliary: not in enabled drivers build config 00:01:39.161 bus/cdx: not in enabled drivers build config 00:01:39.161 bus/dpaa: not in enabled drivers build config 00:01:39.161 bus/fslmc: not in enabled drivers build config 00:01:39.161 bus/ifpga: not in enabled drivers build config 00:01:39.161 bus/platform: not in enabled drivers build config 00:01:39.161 bus/vmbus: not in enabled drivers build config 00:01:39.161 common/cnxk: not in enabled drivers build config 00:01:39.161 common/mlx5: not in enabled drivers build config 00:01:39.161 common/nfp: not in enabled drivers build config 00:01:39.161 common/qat: not in enabled drivers build config 00:01:39.161 common/sfc_efx: not in enabled drivers build config 00:01:39.161 mempool/bucket: not in enabled drivers build config 00:01:39.161 mempool/cnxk: not in enabled drivers build config 00:01:39.161 mempool/dpaa: not in enabled drivers build config 00:01:39.161 mempool/dpaa2: not in enabled drivers build config 
00:01:39.161 mempool/octeontx: not in enabled drivers build config 00:01:39.161 mempool/stack: not in enabled drivers build config 00:01:39.161 dma/cnxk: not in enabled drivers build config 00:01:39.161 dma/dpaa: not in enabled drivers build config 00:01:39.161 dma/dpaa2: not in enabled drivers build config 00:01:39.161 dma/hisilicon: not in enabled drivers build config 00:01:39.161 dma/idxd: not in enabled drivers build config 00:01:39.161 dma/ioat: not in enabled drivers build config 00:01:39.161 dma/skeleton: not in enabled drivers build config 00:01:39.161 net/af_packet: not in enabled drivers build config 00:01:39.161 net/af_xdp: not in enabled drivers build config 00:01:39.161 net/ark: not in enabled drivers build config 00:01:39.161 net/atlantic: not in enabled drivers build config 00:01:39.161 net/avp: not in enabled drivers build config 00:01:39.161 net/axgbe: not in enabled drivers build config 00:01:39.161 net/bnx2x: not in enabled drivers build config 00:01:39.161 net/bnxt: not in enabled drivers build config 00:01:39.161 net/bonding: not in enabled drivers build config 00:01:39.161 net/cnxk: not in enabled drivers build config 00:01:39.161 net/cpfl: not in enabled drivers build config 00:01:39.161 net/cxgbe: not in enabled drivers build config 00:01:39.161 net/dpaa: not in enabled drivers build config 00:01:39.161 net/dpaa2: not in enabled drivers build config 00:01:39.161 net/e1000: not in enabled drivers build config 00:01:39.162 net/ena: not in enabled drivers build config 00:01:39.162 net/enetc: not in enabled drivers build config 00:01:39.162 net/enetfec: not in enabled drivers build config 00:01:39.162 net/enic: not in enabled drivers build config 00:01:39.162 net/failsafe: not in enabled drivers build config 00:01:39.162 net/fm10k: not in enabled drivers build config 00:01:39.162 net/gve: not in enabled drivers build config 00:01:39.162 net/hinic: not in enabled drivers build config 00:01:39.162 net/hns3: not in enabled drivers build config 
00:01:39.162 net/iavf: not in enabled drivers build config 00:01:39.162 net/ice: not in enabled drivers build config 00:01:39.162 net/idpf: not in enabled drivers build config 00:01:39.162 net/igc: not in enabled drivers build config 00:01:39.162 net/ionic: not in enabled drivers build config 00:01:39.162 net/ipn3ke: not in enabled drivers build config 00:01:39.162 net/ixgbe: not in enabled drivers build config 00:01:39.162 net/mana: not in enabled drivers build config 00:01:39.162 net/memif: not in enabled drivers build config 00:01:39.162 net/mlx4: not in enabled drivers build config 00:01:39.162 net/mlx5: not in enabled drivers build config 00:01:39.162 net/mvneta: not in enabled drivers build config 00:01:39.162 net/mvpp2: not in enabled drivers build config 00:01:39.162 net/netvsc: not in enabled drivers build config 00:01:39.162 net/nfb: not in enabled drivers build config 00:01:39.162 net/nfp: not in enabled drivers build config 00:01:39.162 net/ngbe: not in enabled drivers build config 00:01:39.162 net/null: not in enabled drivers build config 00:01:39.162 net/octeontx: not in enabled drivers build config 00:01:39.162 net/octeon_ep: not in enabled drivers build config 00:01:39.162 net/pcap: not in enabled drivers build config 00:01:39.162 net/pfe: not in enabled drivers build config 00:01:39.162 net/qede: not in enabled drivers build config 00:01:39.162 net/ring: not in enabled drivers build config 00:01:39.162 net/sfc: not in enabled drivers build config 00:01:39.162 net/softnic: not in enabled drivers build config 00:01:39.162 net/tap: not in enabled drivers build config 00:01:39.162 net/thunderx: not in enabled drivers build config 00:01:39.162 net/txgbe: not in enabled drivers build config 00:01:39.162 net/vdev_netvsc: not in enabled drivers build config 00:01:39.162 net/vhost: not in enabled drivers build config 00:01:39.162 net/virtio: not in enabled drivers build config 00:01:39.162 net/vmxnet3: not in enabled drivers build config 00:01:39.162 
raw/cnxk_bphy: not in enabled drivers build config 00:01:39.162 raw/cnxk_gpio: not in enabled drivers build config 00:01:39.162 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:39.162 raw/ifpga: not in enabled drivers build config 00:01:39.162 raw/ntb: not in enabled drivers build config 00:01:39.162 raw/skeleton: not in enabled drivers build config 00:01:39.162 crypto/armv8: not in enabled drivers build config 00:01:39.162 crypto/bcmfs: not in enabled drivers build config 00:01:39.162 crypto/caam_jr: not in enabled drivers build config 00:01:39.162 crypto/ccp: not in enabled drivers build config 00:01:39.162 crypto/cnxk: not in enabled drivers build config 00:01:39.162 crypto/dpaa_sec: not in enabled drivers build config 00:01:39.162 crypto/dpaa2_sec: not in enabled drivers build config 00:01:39.162 crypto/ipsec_mb: not in enabled drivers build config 00:01:39.162 crypto/mlx5: not in enabled drivers build config 00:01:39.162 crypto/mvsam: not in enabled drivers build config 00:01:39.162 crypto/nitrox: not in enabled drivers build config 00:01:39.162 crypto/null: not in enabled drivers build config 00:01:39.162 crypto/octeontx: not in enabled drivers build config 00:01:39.162 crypto/openssl: not in enabled drivers build config 00:01:39.162 crypto/scheduler: not in enabled drivers build config 00:01:39.162 crypto/uadk: not in enabled drivers build config 00:01:39.162 crypto/virtio: not in enabled drivers build config 00:01:39.162 compress/isal: not in enabled drivers build config 00:01:39.162 compress/mlx5: not in enabled drivers build config 00:01:39.162 compress/octeontx: not in enabled drivers build config 00:01:39.162 compress/zlib: not in enabled drivers build config 00:01:39.162 regex/mlx5: not in enabled drivers build config 00:01:39.162 regex/cn9k: not in enabled drivers build config 00:01:39.162 ml/cnxk: not in enabled drivers build config 00:01:39.162 vdpa/ifc: not in enabled drivers build config 00:01:39.162 vdpa/mlx5: not in enabled drivers 
build config 00:01:39.162 vdpa/nfp: not in enabled drivers build config 00:01:39.162 vdpa/sfc: not in enabled drivers build config 00:01:39.162 event/cnxk: not in enabled drivers build config 00:01:39.162 event/dlb2: not in enabled drivers build config 00:01:39.162 event/dpaa: not in enabled drivers build config 00:01:39.162 event/dpaa2: not in enabled drivers build config 00:01:39.162 event/dsw: not in enabled drivers build config 00:01:39.162 event/opdl: not in enabled drivers build config 00:01:39.162 event/skeleton: not in enabled drivers build config 00:01:39.162 event/sw: not in enabled drivers build config 00:01:39.162 event/octeontx: not in enabled drivers build config 00:01:39.162 baseband/acc: not in enabled drivers build config 00:01:39.162 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:39.162 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:39.162 baseband/la12xx: not in enabled drivers build config 00:01:39.162 baseband/null: not in enabled drivers build config 00:01:39.162 baseband/turbo_sw: not in enabled drivers build config 00:01:39.162 gpu/cuda: not in enabled drivers build config 00:01:39.162 00:01:39.162 00:01:39.162 Build targets in project: 217 00:01:39.162 00:01:39.162 DPDK 23.11.0 00:01:39.162 00:01:39.162 User defined options 00:01:39.162 libdir : lib 00:01:39.162 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.162 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:39.162 c_link_args : 00:01:39.162 enable_docs : false 00:01:39.162 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:01:39.162 enable_kmods : false 00:01:39.162 machine : native 00:01:39.162 tests : false 00:01:39.162 00:01:39.162 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.162 WARNING: Running the setup command as `meson [options]` instead of `meson setup 
[options]` is ambiguous and deprecated. 00:01:39.162 12:41:46 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:01:39.162 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:39.162 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:39.162 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:39.162 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:39.162 [4/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:39.162 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:39.162 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:39.162 [7/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.427 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:39.427 [9/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.427 [10/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:39.427 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:39.427 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:39.427 [13/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.427 [14/707] Linking static target lib/librte_kvargs.a 00:01:39.427 [15/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:39.427 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:39.427 [17/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.427 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:39.427 [19/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:39.427 [20/707] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.427 [21/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:39.427 [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:39.427 [23/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:39.427 [24/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.427 [25/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.427 [26/707] Linking static target lib/librte_pci.a 00:01:39.427 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.427 [28/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:39.427 [29/707] Linking static target lib/librte_log.a 00:01:39.427 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.686 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.686 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.686 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.686 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.686 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.686 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.686 [37/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.686 [38/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:39.686 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:39.686 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:39.951 [41/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.951 [42/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:39.951 
[43/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:39.951 [44/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:39.951 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:39.951 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:39.951 [47/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.951 [48/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.951 [49/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.951 [50/707] Linking static target lib/librte_meter.a 00:01:39.951 [51/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:39.951 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:39.951 [53/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:39.951 [54/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.951 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:39.951 [56/707] Linking static target lib/librte_ring.a 00:01:39.951 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:39.951 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:39.951 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:39.951 [60/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:39.951 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:39.951 [62/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:39.951 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:39.951 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:39.951 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:39.951 
[66/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.951 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:39.951 [68/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:39.951 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:39.951 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:39.951 [71/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:39.951 [72/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:39.951 [73/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:39.951 [74/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.952 [75/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.218 [76/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.218 [77/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:40.218 [78/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.218 [79/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.218 [80/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.218 [81/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.218 [82/707] Linking static target lib/librte_cmdline.a 00:01:40.218 [83/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.218 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.218 [85/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.218 [86/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.218 [87/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.218 [88/707] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.218 [89/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.218 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.218 [91/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.218 [92/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:40.218 [93/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.218 [94/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.218 [95/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.218 [96/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:40.218 [97/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.218 [98/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.218 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.218 [100/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.218 [101/707] Linking static target lib/librte_metrics.a 00:01:40.218 [102/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.218 [103/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:40.218 [104/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.218 [105/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.218 [106/707] Linking static target lib/librte_net.a 00:01:40.218 [107/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.477 [108/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.477 [109/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.477 [110/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.477 [111/707] Generating lib/log.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:40.477 [112/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.477 [113/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.477 [114/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:40.477 [115/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.477 [116/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.477 [117/707] Linking target lib/librte_log.so.24.0 00:01:40.477 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.477 [119/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:40.477 [120/707] Linking static target lib/librte_cfgfile.a 00:01:40.477 [121/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.477 [122/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:40.477 [123/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.477 [124/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:40.477 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:40.477 [126/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.477 [127/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:40.478 [128/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.478 [129/707] Linking static target lib/librte_bitratestats.a 00:01:40.742 [130/707] Linking static target lib/librte_mempool.a 00:01:40.742 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.742 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.742 [133/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:40.742 [134/707] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.742 [135/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.742 [136/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.742 [137/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:40.742 [138/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:40.742 [139/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.742 [140/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.742 [141/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.742 [142/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.742 [143/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:40.742 [144/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:40.742 [145/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:40.742 [146/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.742 [147/707] Linking target lib/librte_kvargs.so.24.0 00:01:40.742 [148/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:40.742 [149/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.742 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.742 [151/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:41.008 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:41.008 [153/707] Linking static target lib/librte_timer.a 00:01:41.008 [154/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:41.008 [155/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.008 [156/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:41.008 [157/707] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:41.008 [158/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:41.008 [159/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:41.008 [160/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.008 [161/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:41.008 [162/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:41.008 [163/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:41.008 [164/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:41.008 [165/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:41.008 [166/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:41.008 [167/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.008 [168/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:41.008 [169/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:41.008 [170/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:41.008 [171/707] Linking static target lib/librte_bbdev.a 00:01:41.008 [172/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:41.008 [173/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:41.008 [174/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.008 [175/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:41.008 [176/707] Linking static target lib/librte_compressdev.a 00:01:41.008 [177/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:41.008 [178/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:41.008 [179/707] Compiling C object 
lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:41.008 [180/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.267 [181/707] Linking static target lib/librte_jobstats.a 00:01:41.267 [182/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:41.267 [183/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:41.267 [184/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:41.267 [185/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:41.267 [186/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:41.267 [187/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:41.267 [188/707] Linking static target lib/librte_latencystats.a 00:01:41.267 [189/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:41.267 [190/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:41.267 [191/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:41.267 [192/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:41.267 [193/707] Linking static target lib/librte_dispatcher.a 00:01:41.267 [194/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:41.267 [195/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:41.267 [196/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:41.267 [197/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:41.267 [198/707] Linking static target lib/librte_dmadev.a 00:01:41.267 [199/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:41.529 [200/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:41.529 [201/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:41.529 [202/707] Linking static target 
lib/member/libsketch_avx512_tmp.a 00:01:41.529 [203/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:41.529 [204/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:41.529 [205/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:41.529 [206/707] Linking static target lib/librte_gro.a 00:01:41.529 [207/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:41.529 [208/707] Linking static target lib/librte_gpudev.a 00:01:41.529 [209/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:41.529 [210/707] Linking static target lib/librte_rcu.a 00:01:41.530 [211/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:41.530 [212/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:41.530 [213/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:41.530 [214/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.530 [215/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:41.530 [216/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:41.530 [217/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:41.530 [218/707] Linking static target lib/librte_telemetry.a 00:01:41.530 [219/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:41.530 [220/707] Linking static target lib/librte_eal.a 00:01:41.530 [221/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:41.530 [222/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:41.530 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:41.530 [224/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:41.530 [225/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:41.530 
[226/707] Linking static target lib/librte_ip_frag.a 00:01:41.530 [227/707] Linking static target lib/librte_distributor.a 00:01:41.530 [228/707] Linking static target lib/librte_gso.a 00:01:41.530 [229/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.530 [230/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:41.530 [231/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:41.530 [232/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:41.530 [233/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:41.530 [234/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:41.794 [235/707] Linking static target lib/librte_stack.a 00:01:41.794 [236/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:41.794 [237/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.794 [238/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.794 [239/707] Linking static target lib/librte_regexdev.a 00:01:41.794 [240/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:41.794 [241/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.794 [242/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:41.794 [243/707] Linking static target lib/librte_mbuf.a 00:01:41.794 [244/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:41.794 [245/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.794 [246/707] Linking static target lib/librte_mldev.a 00:01:41.794 [247/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:41.794 [248/707] Linking static target lib/librte_pcapng.a 00:01:41.794 [249/707] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:41.794 [250/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:41.794 [251/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:41.794 [252/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:41.794 [253/707] Linking static target lib/librte_bpf.a 00:01:41.794 [254/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.794 [255/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:41.794 [256/707] Linking static target lib/librte_rawdev.a 00:01:41.794 [257/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:42.068 [258/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:42.068 [259/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:42.068 [260/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.068 [261/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.068 [262/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:42.068 [263/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:42.068 [264/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.068 [265/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.068 [266/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.068 [267/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:42.068 [268/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.068 [269/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.068 [270/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.068 [271/707] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:42.068 [272/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:42.068 [273/707] Linking static target lib/librte_power.a 00:01:42.068 [274/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.068 [275/707] Linking static target lib/librte_reorder.a 00:01:42.068 [276/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:42.068 [277/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.068 [278/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.068 [279/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:42.068 [280/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.068 [281/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:42.068 [282/707] Linking static target lib/librte_lpm.a 00:01:42.068 [283/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.068 [284/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:42.068 [285/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.330 [286/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:42.330 [287/707] Linking static target lib/librte_security.a 00:01:42.330 [288/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.330 [289/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:42.330 [290/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:42.330 [291/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:42.330 [292/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:42.330 [293/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.330 [294/707] Generating 
lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.330 [295/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.330 [296/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:42.330 [297/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:42.330 [298/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:42.330 [299/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:42.330 [300/707] Linking static target lib/librte_rib.a 00:01:42.330 [301/707] Linking target lib/librte_telemetry.so.24.0 00:01:42.330 [302/707] Linking static target lib/librte_efd.a 00:01:42.330 [303/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.330 [304/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:42.330 [305/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:42.330 [306/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:42.596 [307/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:42.596 [308/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:42.596 [309/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:42.596 [310/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:42.596 [311/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:42.596 [312/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:42.596 [313/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:42.596 [314/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:42.596 [315/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:42.596 [316/707] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:42.596 [317/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:42.596 [318/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.596 [319/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.596 [320/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.596 [321/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:42.861 [322/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:42.861 [323/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:42.861 [324/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.861 [325/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.861 [326/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:42.861 [327/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:42.861 [328/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:42.861 [329/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:42.861 [330/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:42.861 [331/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:42.861 [332/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:42.861 [333/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:42.861 [334/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:42.861 [335/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:42.861 [336/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:42.861 [337/707] Compiling C object 
lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:42.861 [338/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:42.861 [339/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:42.861 [340/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:42.861 [341/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:42.861 [342/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.861 [343/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:42.861 [344/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:43.126 [345/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.126 [346/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:43.126 [347/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:43.126 [348/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:43.126 [349/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:43.126 [350/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:43.126 [351/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.126 [352/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:43.126 [353/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:43.126 [354/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.126 [355/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:43.126 [356/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:43.126 [357/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:43.126 [358/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:43.126 [359/707] Generating lib/power.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:43.126 [360/707] Linking static target lib/librte_fib.a 00:01:43.126 [361/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:43.126 [362/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:43.126 [363/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:43.126 [364/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:43.126 [365/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:43.390 [366/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:43.390 [367/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:43.390 [368/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:43.390 [369/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:43.390 [370/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:43.390 [371/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:43.390 [372/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:43.390 [373/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:43.390 [374/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:43.390 [375/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:43.390 [376/707] Linking static target lib/librte_pdump.a 00:01:43.390 [377/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:43.390 [378/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:43.390 [379/707] Linking static target lib/librte_graph.a 00:01:43.651 [380/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:43.651 [381/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:43.651 [382/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:43.651 
[383/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:43.651 [384/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:43.651 [385/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:43.651 [386/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:43.651 [387/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:43.651 [388/707] Linking static target lib/librte_cryptodev.a 00:01:43.651 [389/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:43.651 [390/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:43.651 [391/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:43.652 [392/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:43.652 [393/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:43.652 [394/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:43.652 [395/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:43.652 [396/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:43.652 [397/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:43.920 [398/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:43.920 [399/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:43.920 [400/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:43.920 [401/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:43.920 [402/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.920 [403/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:43.920 [404/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:43.920 [405/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:43.920 [406/707] Compiling C object 
app/dpdk-graph.p/graph_main.c.o 00:01:43.920 [407/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:43.920 [408/707] Linking static target drivers/librte_bus_vdev.a 00:01:43.920 [409/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:43.920 [410/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:43.920 [411/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:43.920 [412/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:43.920 [413/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.920 [414/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:43.920 [415/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:43.920 [416/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:43.920 [417/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:43.920 [418/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:43.920 [419/707] Linking static target lib/librte_member.a 00:01:43.920 [420/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:44.180 [421/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:44.180 [422/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:44.180 [423/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:44.180 [424/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:44.180 [425/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:44.180 [426/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:44.180 [427/707] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:44.180 [428/707] Linking static target lib/librte_sched.a 00:01:44.180 [429/707] Linking static target lib/librte_table.a 00:01:44.180 [430/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:44.180 [431/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:44.180 [432/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.180 [433/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:44.180 [434/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:44.180 [435/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:44.180 [436/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.180 [437/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:44.180 [438/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.180 [439/707] Linking static target drivers/librte_bus_pci.a 00:01:44.180 [440/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:44.180 [441/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:44.449 [442/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:44.449 [443/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:44.449 [444/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.449 [445/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:44.449 [446/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:44.449 [447/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:44.449 [448/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:44.449 [449/707] Linking static target lib/librte_node.a 
00:01:44.449 [450/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:44.449 [451/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:44.449 [452/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.449 [453/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:44.708 [454/707] Linking static target lib/librte_ipsec.a 00:01:44.708 [455/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:44.708 [456/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:44.708 [457/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:44.708 [458/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:44.708 [459/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:44.709 [460/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.709 [461/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:44.709 [462/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:44.709 [463/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:44.709 [464/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:44.709 [465/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:44.709 [466/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:44.709 [467/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:44.709 [468/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:44.709 [469/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:44.709 [470/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:44.709 [471/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:44.709 [472/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:44.709 [473/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:44.709 [474/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:44.709 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:44.709 [476/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:44.709 [477/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:44.971 [478/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:44.971 [479/707] Linking static target lib/librte_pdcp.a 00:01:44.971 [480/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:44.971 [481/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.971 [482/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:44.971 [483/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:44.971 [484/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:44.971 [485/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:44.971 [486/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:44.971 [487/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:44.971 [488/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:44.971 [489/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:44.971 [490/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:44.971 [491/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:44.971 [492/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:44.971 [493/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:44.971 [494/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:44.971 [495/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.230 [496/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:45.230 [497/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.230 [498/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:45.230 [499/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:45.230 [500/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:45.230 [501/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:45.230 [502/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.230 [503/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.230 [504/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:45.230 [505/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.230 [506/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:45.231 [507/707] Linking static target drivers/librte_mempool_ring.a 00:01:45.231 [508/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:45.231 [509/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:45.231 [510/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:45.231 [511/707] Compiling C 
object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:45.231 [512/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:45.231 [513/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:45.231 [514/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:45.231 [515/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:45.231 [516/707] Linking static target lib/librte_hash.a 00:01:45.231 [517/707] Linking static target lib/librte_port.a 00:01:45.231 [518/707] Linking static target lib/librte_eventdev.a 00:01:45.231 [519/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.231 [520/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.231 [521/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:45.490 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:45.490 [523/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:45.490 [524/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:45.490 [525/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:45.490 [526/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:45.490 [527/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:45.490 [528/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:45.490 [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:45.490 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:45.490 [531/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:45.490 [532/707] Compiling C object 
app/dpdk-test-fib.p/test-fib_main.c.o 00:01:45.490 [533/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:45.490 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:45.490 [535/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:45.490 [536/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:45.490 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:45.490 [538/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:45.490 [539/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:45.490 [540/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:45.748 [541/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:45.748 [542/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:45.748 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:45.748 [544/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:45.748 [545/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.748 [546/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:45.748 [547/707] Linking static target lib/acl/libavx2_tmp.a 00:01:45.748 [548/707] Linking static target lib/librte_acl.a 00:01:45.748 [549/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:45.748 [550/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:45.748 [551/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:45.748 [552/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:45.748 [553/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:46.006 [554/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:46.006 [555/707] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:46.006 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:46.006 [557/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:46.006 [558/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:46.006 [559/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.006 [560/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:46.006 [561/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.006 [562/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:46.006 [563/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:46.006 [564/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:46.006 [565/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:46.006 [566/707] Linking static target lib/librte_ethdev.a 00:01:46.265 [567/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:46.265 [568/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:46.265 [569/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:46.265 [570/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.265 [571/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:46.265 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:46.265 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:46.524 [574/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:46.783 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:47.042 [576/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:47.302 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:47.302 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:47.560 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:47.560 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:48.127 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:48.127 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:48.127 [583/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:48.385 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:48.385 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:48.385 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:48.385 [587/707] Linking static target drivers/librte_net_i40e.a 00:01:48.385 [588/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.321 [589/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.321 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.890 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:50.148 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:52.682 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.682 [594/707] Linking target lib/librte_eal.so.24.0 00:01:52.682 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:52.682 [596/707] Linking target lib/librte_ring.so.24.0 00:01:52.682 [597/707] Linking target lib/librte_cfgfile.so.24.0 00:01:52.682 [598/707] Linking target 
lib/librte_pci.so.24.0 00:01:52.682 [599/707] Linking target lib/librte_jobstats.so.24.0 00:01:52.682 [600/707] Linking target lib/librte_dmadev.so.24.0 00:01:52.682 [601/707] Linking target lib/librte_timer.so.24.0 00:01:52.682 [602/707] Linking target lib/librte_meter.so.24.0 00:01:52.682 [603/707] Linking target lib/librte_stack.so.24.0 00:01:52.682 [604/707] Linking target drivers/librte_bus_vdev.so.24.0 00:01:52.682 [605/707] Linking target lib/librte_rawdev.so.24.0 00:01:52.682 [606/707] Linking target lib/librte_acl.so.24.0 00:01:52.682 [607/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:52.682 [608/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:52.682 [609/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:52.682 [610/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:52.682 [611/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:52.682 [612/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:52.682 [613/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:52.682 [614/707] Linking target lib/librte_rcu.so.24.0 00:01:52.682 [615/707] Linking target drivers/librte_bus_pci.so.24.0 00:01:52.682 [616/707] Linking target lib/librte_mempool.so.24.0 00:01:52.941 [617/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:52.941 [618/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:52.941 [619/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:52.941 [620/707] Linking target drivers/librte_mempool_ring.so.24.0 00:01:52.941 [621/707] Linking target lib/librte_rib.so.24.0 00:01:52.941 [622/707] Linking target lib/librte_mbuf.so.24.0 00:01:52.941 [623/707] Generating symbol 
file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:53.200 [624/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:53.200 [625/707] Linking target lib/librte_compressdev.so.24.0 00:01:53.200 [626/707] Linking target lib/librte_sched.so.24.0 00:01:53.200 [627/707] Linking target lib/librte_regexdev.so.24.0 00:01:53.200 [628/707] Linking target lib/librte_distributor.so.24.0 00:01:53.200 [629/707] Linking target lib/librte_gpudev.so.24.0 00:01:53.200 [630/707] Linking target lib/librte_bbdev.so.24.0 00:01:53.200 [631/707] Linking target lib/librte_net.so.24.0 00:01:53.200 [632/707] Linking target lib/librte_cryptodev.so.24.0 00:01:53.200 [633/707] Linking target lib/librte_mldev.so.24.0 00:01:53.200 [634/707] Linking target lib/librte_reorder.so.24.0 00:01:53.200 [635/707] Linking target lib/librte_fib.so.24.0 00:01:53.200 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:53.200 [637/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:53.200 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:53.200 [639/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:53.200 [640/707] Linking target lib/librte_hash.so.24.0 00:01:53.200 [641/707] Linking target lib/librte_cmdline.so.24.0 00:01:53.200 [642/707] Linking target lib/librte_security.so.24.0 00:01:53.459 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:53.459 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:53.459 [645/707] Linking target lib/librte_lpm.so.24.0 00:01:53.459 [646/707] Linking target lib/librte_efd.so.24.0 00:01:53.459 [647/707] Linking target lib/librte_member.so.24.0 00:01:53.459 [648/707] Linking target lib/librte_pdcp.so.24.0 00:01:53.459 [649/707] Linking target lib/librte_ipsec.so.24.0 
00:01:53.459 [650/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.718 [651/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:53.718 [652/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:53.718 [653/707] Linking target lib/librte_ethdev.so.24.0 00:01:53.718 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:53.718 [655/707] Linking target lib/librte_gso.so.24.0 00:01:53.718 [656/707] Linking target lib/librte_gro.so.24.0 00:01:53.719 [657/707] Linking target lib/librte_metrics.so.24.0 00:01:53.719 [658/707] Linking target lib/librte_bpf.so.24.0 00:01:53.719 [659/707] Linking target lib/librte_pcapng.so.24.0 00:01:53.719 [660/707] Linking target lib/librte_ip_frag.so.24.0 00:01:53.719 [661/707] Linking target lib/librte_power.so.24.0 00:01:53.719 [662/707] Linking target lib/librte_eventdev.so.24.0 00:01:53.977 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:01:53.977 [664/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:53.977 [665/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:53.977 [666/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:53.977 [667/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:53.977 [668/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:53.977 [669/707] Linking target lib/librte_latencystats.so.24.0 00:01:53.977 [670/707] Linking target lib/librte_bitratestats.so.24.0 00:01:53.977 [671/707] Linking target lib/librte_dispatcher.so.24.0 00:01:53.977 [672/707] Linking target lib/librte_pdump.so.24.0 00:01:53.977 [673/707] Linking target lib/librte_graph.so.24.0 00:01:53.977 [674/707] Linking target lib/librte_port.so.24.0 00:01:54.236 [675/707] 
Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:54.236 [676/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:54.236 [677/707] Linking target lib/librte_table.so.24.0 00:01:54.236 [678/707] Linking target lib/librte_node.so.24.0 00:01:54.236 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:56.768 [680/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.768 [681/707] Linking static target lib/librte_vhost.a 00:01:56.768 [682/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:56.768 [683/707] Linking static target lib/librte_pipeline.a 00:01:57.336 [684/707] Linking target app/dpdk-dumpcap 00:01:57.336 [685/707] Linking target app/dpdk-test-crypto-perf 00:01:57.336 [686/707] Linking target app/dpdk-test-gpudev 00:01:57.336 [687/707] Linking target app/dpdk-proc-info 00:01:57.336 [688/707] Linking target app/dpdk-graph 00:01:57.336 [689/707] Linking target app/dpdk-test-dma-perf 00:01:57.336 [690/707] Linking target app/dpdk-test-mldev 00:01:57.336 [691/707] Linking target app/dpdk-test-security-perf 00:01:57.336 [692/707] Linking target app/dpdk-test-flow-perf 00:01:57.336 [693/707] Linking target app/dpdk-test-regex 00:01:57.336 [694/707] Linking target app/dpdk-test-acl 00:01:57.336 [695/707] Linking target app/dpdk-test-cmdline 00:01:57.336 [696/707] Linking target app/dpdk-test-fib 00:01:57.336 [697/707] Linking target app/dpdk-test-compress-perf 00:01:57.336 [698/707] Linking target app/dpdk-pdump 00:01:57.336 [699/707] Linking target app/dpdk-test-pipeline 00:01:57.336 [700/707] Linking target app/dpdk-test-sad 00:01:57.336 [701/707] Linking target app/dpdk-test-bbdev 00:01:57.336 [702/707] Linking target app/dpdk-test-eventdev 00:01:57.336 [703/707] Linking target app/dpdk-testpmd 00:01:58.713 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:58.713 [705/707] Linking target lib/librte_vhost.so.24.0 00:02:02.003 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.003 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:02.003 12:42:09 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:02.003 12:42:09 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:02.003 12:42:09 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:02.003 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:02.003 [0/1] Installing files. 00:02:02.266 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:02.266 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:02.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.268 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.269 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:02.269 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.270 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.270 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.270 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:02.271 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:02.271 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.271 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.271 
Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.271 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.271 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_ethdev.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_bpf.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing 
lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_power.a 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.272 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_vhost.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_node.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:02.535 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:02.535 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:02.535 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:02.535 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:02.535 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-compress-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.535 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.536 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.536 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.536 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:02.539 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:02.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:02.539 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 
00:02:02.539 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:02.539 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:02.539 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:02.539 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:02.539 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:02.539 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:02.539 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:02.540 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:02.540 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:02.540 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:02.540 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:02.540 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:02.540 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:02.540 Installing symlink pointing to librte_mbuf.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:02.540 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:02.540 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:02.540 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:02.540 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:02.540 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:02.540 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:02.540 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:02.540 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:02.540 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:02.540 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:02.540 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:02.540 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:02.540 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:02.540 Installing symlink pointing to 
librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:02.540 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:02.540 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:02.540 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:02.540 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:02.540 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:02.540 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:02.540 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:02.540 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:02.540 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:02.540 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:02.540 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:02.540 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:02.540 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 
00:02:02.540 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:02.540 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:02.540 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:02.540 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:02.540 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:02.540 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:02.540 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:02.540 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:02.540 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:02.540 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:02.540 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:02.540 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:02.540 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:02.540 Installing symlink 
pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:02.540 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:02.540 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:02.540 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:02.540 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:02.540 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:02.540 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:02.540 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:02.540 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:02.540 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:02.540 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:02.540 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:02.540 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:02.540 Installing symlink pointing to librte_lpm.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:02.540 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:02.540 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:02.540 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:02.540 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:02.540 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:02.540 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:02.540 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:02.540 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:02.540 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:02.540 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:02.540 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:02.540 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:02.540 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:02.540 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:02.540 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 
00:02:02.540 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:02.540 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:02.540 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:02.540 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:02.540 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:02.540 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:02.540 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:02.540 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:02.540 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:02.540 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:02.540 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:02.540 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:02.540 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:02.540 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:02.540 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:02.540 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:02.540 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:02.540 
Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:02.540 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:02.540 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:02.540 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:02.540 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:02.541 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:02.541 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:02.541 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:02.541 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:02.541 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:02.541 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:02.541 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:02.541 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:02.541 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 
00:02:02.541 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:02.541 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:02.541 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:02.541 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:02.541 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:02.541 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:02.541 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:02.541 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:02.541 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:02.541 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:02.541 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:02.541 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:02.541 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:02.541 Installing symlink 
pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:02.541 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:02.541 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:02.541 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:02.541 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:02.800 12:42:10 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:02.800 12:42:10 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.800 00:02:02.800 real 0m29.758s 00:02:02.800 user 9m27.213s 00:02:02.800 sys 2m10.257s 00:02:02.800 12:42:10 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:02.800 12:42:10 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:02.800 ************************************ 00:02:02.800 END TEST build_native_dpdk 00:02:02.800 ************************************ 00:02:02.800 12:42:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:02.800 12:42:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:02.800 12:42:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:02.800 12:42:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:02.800 12:42:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:02.800 12:42:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:02.800 12:42:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:02.800 12:42:10 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:02.800 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:03.059 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.059 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:03.059 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:03.318 Using 'verbs' RDMA provider 00:02:16.472 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:28.688 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:28.688 Creating mk/config.mk...done. 00:02:28.688 Creating mk/cc.flags.mk...done. 00:02:28.688 Type 'make' to build. 
00:02:28.688 12:42:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:28.688 12:42:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:28.688 12:42:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:28.688 12:42:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.688 ************************************ 00:02:28.688 START TEST make 00:02:28.688 ************************************ 00:02:28.688 12:42:36 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:30.601 The Meson build system 00:02:30.601 Version: 1.5.0 00:02:30.601 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:30.601 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:30.601 Build type: native build 00:02:30.601 Project name: libvfio-user 00:02:30.601 Project version: 0.0.1 00:02:30.601 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:30.601 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:30.601 Host machine cpu family: x86_64 00:02:30.601 Host machine cpu: x86_64 00:02:30.601 Run-time dependency threads found: YES 00:02:30.601 Library dl found: YES 00:02:30.601 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:30.601 Run-time dependency json-c found: YES 0.17 00:02:30.601 Run-time dependency cmocka found: YES 1.1.7 00:02:30.601 Program pytest-3 found: NO 00:02:30.601 Program flake8 found: NO 00:02:30.601 Program misspell-fixer found: NO 00:02:30.601 Program restructuredtext-lint found: NO 00:02:30.601 Program valgrind found: YES (/usr/bin/valgrind) 00:02:30.601 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:30.601 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:30.601 Compiler for C supports arguments -Wwrite-strings: YES 00:02:30.601 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites 
arg in add_test_setup.
00:02:30.601 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:30.601 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:30.601 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:30.601 Build targets in project: 8
00:02:30.601 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:30.601 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:30.601
00:02:30.601 libvfio-user 0.0.1
00:02:30.601
00:02:30.601 User defined options
00:02:30.601 buildtype : debug
00:02:30.601 default_library: shared
00:02:30.601 libdir : /usr/local/lib
00:02:30.601
00:02:30.601 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:31.166 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:31.166 [1/37] Compiling C object samples/null.p/null.c.o
00:02:31.166 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:31.166 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:31.166 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:31.166 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:31.166 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:31.166 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:31.166 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:31.166 [9/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:31.166 [10/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:31.166 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:31.166 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:31.166 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:31.166 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:31.166 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:31.166 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:31.166 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:31.166 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:31.166 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:31.166 [20/37] Compiling C object samples/server.p/server.c.o
00:02:31.166 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:31.166 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:31.166 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:31.425 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:31.425 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:31.425 [26/37] Compiling C object samples/client.p/client.c.o
00:02:31.425 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:31.425 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:31.425 [29/37] Linking target samples/client
00:02:31.425 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:31.425 [31/37] Linking target test/unit_tests
00:02:31.425 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:31.684 [33/37] Linking target samples/server
00:02:31.684 [34/37] Linking target samples/lspci
00:02:31.684 [35/37] Linking target samples/null
00:02:31.684 [36/37] Linking target samples/gpio-pci-idio-16
00:02:31.684 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:31.684 INFO: autodetecting backend as ninja
00:02:31.684 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:31.684 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:31.942 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:31.942 ninja: no work to do.
00:02:58.490 CC lib/ut/ut.o
00:02:58.490 CC lib/log/log.o
00:02:58.490 CC lib/log/log_flags.o
00:02:58.749 CC lib/log/log_deprecated.o
00:02:58.749 CC lib/ut_mock/mock.o
00:02:58.749 LIB libspdk_ut_mock.a
00:02:58.749 LIB libspdk_ut.a
00:02:58.749 LIB libspdk_log.a
00:02:58.749 SO libspdk_ut_mock.so.6.0
00:02:58.749 SO libspdk_ut.so.2.0
00:02:58.749 SO libspdk_log.so.7.1
00:02:58.749 SYMLINK libspdk_ut_mock.so
00:02:59.009 SYMLINK libspdk_ut.so
00:02:59.009 SYMLINK libspdk_log.so
00:02:59.267 CC lib/ioat/ioat.o
00:02:59.267 CXX lib/trace_parser/trace.o
00:02:59.267 CC lib/util/base64.o
00:02:59.268 CC lib/util/bit_array.o
00:02:59.268 CC lib/dma/dma.o
00:02:59.268 CC lib/util/cpuset.o
00:02:59.268 CC lib/util/crc16.o
00:02:59.268 CC lib/util/crc32.o
00:02:59.268 CC lib/util/crc32c.o
00:02:59.268 CC lib/util/crc32_ieee.o
00:02:59.268 CC lib/util/crc64.o
00:02:59.268 CC lib/util/dif.o
00:02:59.268 CC lib/util/fd.o
00:02:59.268 CC lib/util/fd_group.o
00:02:59.268 CC lib/util/file.o
00:02:59.268 CC lib/util/hexlify.o
00:02:59.268 CC lib/util/iov.o
00:02:59.268 CC lib/util/math.o
00:02:59.268 CC lib/util/net.o
00:02:59.268 CC lib/util/pipe.o
00:02:59.268 CC lib/util/strerror_tls.o
00:02:59.268 CC lib/util/string.o
00:02:59.268 CC lib/util/uuid.o
00:02:59.268 CC lib/util/xor.o
00:02:59.268 CC lib/util/zipf.o
00:02:59.268 CC lib/util/md5.o
00:02:59.527 CC lib/vfio_user/host/vfio_user.o
00:02:59.527 CC lib/vfio_user/host/vfio_user_pci.o
00:02:59.527 LIB libspdk_dma.a
00:02:59.527 SO libspdk_dma.so.5.0
00:02:59.527 LIB libspdk_ioat.a
00:02:59.527 SYMLINK libspdk_dma.so
00:02:59.527 SO libspdk_ioat.so.7.0
00:02:59.527 SYMLINK libspdk_ioat.so
00:02:59.786 LIB libspdk_vfio_user.a
00:02:59.786 SO libspdk_vfio_user.so.5.0
00:02:59.786 LIB libspdk_util.a
00:02:59.786 SYMLINK libspdk_vfio_user.so
00:02:59.786 SO libspdk_util.so.10.1
00:03:00.045 SYMLINK libspdk_util.so
00:03:00.045 LIB libspdk_trace_parser.a
00:03:00.045 SO libspdk_trace_parser.so.6.0
00:03:00.045 SYMLINK libspdk_trace_parser.so
00:03:00.304 CC lib/conf/conf.o
00:03:00.304 CC lib/json/json_parse.o
00:03:00.304 CC lib/json/json_util.o
00:03:00.304 CC lib/env_dpdk/env.o
00:03:00.304 CC lib/idxd/idxd.o
00:03:00.304 CC lib/json/json_write.o
00:03:00.304 CC lib/env_dpdk/memory.o
00:03:00.304 CC lib/idxd/idxd_user.o
00:03:00.304 CC lib/rdma_utils/rdma_utils.o
00:03:00.304 CC lib/idxd/idxd_kernel.o
00:03:00.304 CC lib/env_dpdk/pci.o
00:03:00.304 CC lib/env_dpdk/init.o
00:03:00.304 CC lib/env_dpdk/threads.o
00:03:00.304 CC lib/env_dpdk/pci_ioat.o
00:03:00.304 CC lib/vmd/vmd.o
00:03:00.304 CC lib/env_dpdk/pci_virtio.o
00:03:00.304 CC lib/vmd/led.o
00:03:00.304 CC lib/env_dpdk/pci_vmd.o
00:03:00.304 CC lib/env_dpdk/pci_idxd.o
00:03:00.304 CC lib/env_dpdk/pci_event.o
00:03:00.304 CC lib/env_dpdk/sigbus_handler.o
00:03:00.304 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:00.304 CC lib/env_dpdk/pci_dpdk.o
00:03:00.304 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:00.563 LIB libspdk_conf.a
00:03:00.563 SO libspdk_conf.so.6.0
00:03:00.563 LIB libspdk_rdma_utils.a
00:03:00.563 LIB libspdk_json.a
00:03:00.563 SO libspdk_rdma_utils.so.1.0
00:03:00.563 SYMLINK libspdk_conf.so
00:03:00.563 SO libspdk_json.so.6.0
00:03:00.563 SYMLINK libspdk_rdma_utils.so
00:03:00.563 SYMLINK libspdk_json.so
00:03:00.822 LIB libspdk_idxd.a
00:03:00.822 SO libspdk_idxd.so.12.1
00:03:00.822 LIB libspdk_vmd.a
00:03:00.822 SO libspdk_vmd.so.6.0
00:03:00.822 SYMLINK libspdk_idxd.so
00:03:01.081 SYMLINK libspdk_vmd.so
00:03:01.081 CC lib/rdma_provider/common.o
00:03:01.081 CC lib/rdma_provider/rdma_provider_verbs.o
00:03:01.081 CC lib/jsonrpc/jsonrpc_server.o
00:03:01.081 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:01.081 CC lib/jsonrpc/jsonrpc_client.o
00:03:01.081 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:01.081 LIB libspdk_rdma_provider.a
00:03:01.340 LIB libspdk_jsonrpc.a
00:03:01.340 SO libspdk_rdma_provider.so.7.0
00:03:01.340 SO libspdk_jsonrpc.so.6.0
00:03:01.340 SYMLINK libspdk_rdma_provider.so
00:03:01.340 SYMLINK libspdk_jsonrpc.so
00:03:01.340 LIB libspdk_env_dpdk.a
00:03:01.340 SO libspdk_env_dpdk.so.15.1
00:03:01.599 SYMLINK libspdk_env_dpdk.so
00:03:01.599 CC lib/rpc/rpc.o
00:03:01.857 LIB libspdk_rpc.a
00:03:01.857 SO libspdk_rpc.so.6.0
00:03:01.857 SYMLINK libspdk_rpc.so
00:03:02.426 CC lib/notify/notify.o
00:03:02.426 CC lib/notify/notify_rpc.o
00:03:02.426 CC lib/trace/trace.o
00:03:02.426 CC lib/trace/trace_flags.o
00:03:02.426 CC lib/trace/trace_rpc.o
00:03:02.426 CC lib/keyring/keyring.o
00:03:02.426 CC lib/keyring/keyring_rpc.o
00:03:02.426 LIB libspdk_notify.a
00:03:02.426 SO libspdk_notify.so.6.0
00:03:02.426 LIB libspdk_keyring.a
00:03:02.687 LIB libspdk_trace.a
00:03:02.687 SO libspdk_keyring.so.2.0
00:03:02.687 SYMLINK libspdk_notify.so
00:03:02.687 SO libspdk_trace.so.11.0
00:03:02.687 SYMLINK libspdk_keyring.so
00:03:02.687 SYMLINK libspdk_trace.so
00:03:02.946 CC lib/sock/sock.o
00:03:02.946 CC lib/thread/thread.o
00:03:02.946 CC lib/sock/sock_rpc.o
00:03:02.946 CC lib/thread/iobuf.o
00:03:03.205 LIB libspdk_sock.a
00:03:03.464 SO libspdk_sock.so.10.0
00:03:03.464 SYMLINK libspdk_sock.so
00:03:03.724 CC lib/nvme/nvme_ctrlr_cmd.o
00:03:03.724 CC lib/nvme/nvme_ctrlr.o
00:03:03.724 CC lib/nvme/nvme_fabric.o
00:03:03.724 CC lib/nvme/nvme_ns_cmd.o
00:03:03.724 CC lib/nvme/nvme_ns.o
00:03:03.724 CC lib/nvme/nvme_pcie_common.o
00:03:03.724 CC lib/nvme/nvme_pcie.o
00:03:03.724 CC lib/nvme/nvme_qpair.o
00:03:03.724 CC lib/nvme/nvme_quirks.o
00:03:03.724 CC lib/nvme/nvme.o
00:03:03.724 CC lib/nvme/nvme_transport.o
00:03:03.724 CC lib/nvme/nvme_discovery.o
00:03:03.724 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:03.724 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:03.724 CC lib/nvme/nvme_tcp.o
00:03:03.724 CC lib/nvme/nvme_opal.o
00:03:03.724 CC lib/nvme/nvme_io_msg.o
00:03:03.724 CC lib/nvme/nvme_poll_group.o
00:03:03.724 CC lib/nvme/nvme_zns.o
00:03:03.724 CC lib/nvme/nvme_stubs.o
00:03:03.724 CC lib/nvme/nvme_auth.o
00:03:03.724 CC lib/nvme/nvme_cuse.o
00:03:03.724 CC lib/nvme/nvme_vfio_user.o
00:03:03.724 CC lib/nvme/nvme_rdma.o
00:03:03.983 LIB libspdk_thread.a
00:03:04.241 SO libspdk_thread.so.11.0
00:03:04.241 SYMLINK libspdk_thread.so
00:03:04.499 CC lib/fsdev/fsdev.o
00:03:04.499 CC lib/fsdev/fsdev_io.o
00:03:04.499 CC lib/fsdev/fsdev_rpc.o
00:03:04.499 CC lib/accel/accel.o
00:03:04.499 CC lib/blob/request.o
00:03:04.499 CC lib/blob/blobstore.o
00:03:04.499 CC lib/accel/accel_rpc.o
00:03:04.499 CC lib/blob/zeroes.o
00:03:04.499 CC lib/accel/accel_sw.o
00:03:04.499 CC lib/blob/blob_bs_dev.o
00:03:04.499 CC lib/virtio/virtio.o
00:03:04.499 CC lib/virtio/virtio_vhost_user.o
00:03:04.499 CC lib/virtio/virtio_vfio_user.o
00:03:04.499 CC lib/virtio/virtio_pci.o
00:03:04.499 CC lib/init/json_config.o
00:03:04.499 CC lib/init/rpc.o
00:03:04.499 CC lib/init/subsystem.o
00:03:04.499 CC lib/vfu_tgt/tgt_endpoint.o
00:03:04.499 CC lib/init/subsystem_rpc.o
00:03:04.499 CC lib/vfu_tgt/tgt_rpc.o
00:03:04.758 LIB libspdk_init.a
00:03:04.758 SO libspdk_init.so.6.0
00:03:04.758 LIB libspdk_vfu_tgt.a
00:03:04.758 LIB libspdk_virtio.a
00:03:04.758 SO libspdk_vfu_tgt.so.3.0
00:03:05.017 SYMLINK libspdk_init.so
00:03:05.017 SO libspdk_virtio.so.7.0
00:03:05.017 SYMLINK libspdk_vfu_tgt.so
00:03:05.017 SYMLINK libspdk_virtio.so
00:03:05.017 LIB libspdk_fsdev.a
00:03:05.017 SO libspdk_fsdev.so.2.0
00:03:05.276 SYMLINK libspdk_fsdev.so
00:03:05.276 CC lib/event/app.o
00:03:05.276 CC lib/event/reactor.o
00:03:05.276 CC lib/event/log_rpc.o
00:03:05.276 CC lib/event/app_rpc.o
00:03:05.276 CC lib/event/scheduler_static.o
00:03:05.276 LIB libspdk_accel.a
00:03:05.535 SO libspdk_accel.so.16.0
00:03:05.535 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:05.535 SYMLINK libspdk_accel.so
00:03:05.535 LIB libspdk_nvme.a
00:03:05.535 LIB libspdk_event.a
00:03:05.535 SO libspdk_event.so.14.0
00:03:05.535 SO libspdk_nvme.so.15.0
00:03:05.793 SYMLINK libspdk_event.so
00:03:05.793 SYMLINK libspdk_nvme.so
00:03:05.793 CC lib/bdev/bdev.o
00:03:05.793 CC lib/bdev/bdev_rpc.o
00:03:05.793 CC lib/bdev/bdev_zone.o
00:03:05.794 CC lib/bdev/part.o
00:03:05.794 CC lib/bdev/scsi_nvme.o
00:03:06.053 LIB libspdk_fuse_dispatcher.a
00:03:06.053 SO libspdk_fuse_dispatcher.so.1.0
00:03:06.053 SYMLINK libspdk_fuse_dispatcher.so
00:03:06.621 LIB libspdk_blob.a
00:03:06.621 SO libspdk_blob.so.12.0
00:03:06.880 SYMLINK libspdk_blob.so
00:03:07.138 CC lib/lvol/lvol.o
00:03:07.138 CC lib/blobfs/blobfs.o
00:03:07.138 CC lib/blobfs/tree.o
00:03:07.706 LIB libspdk_bdev.a
00:03:07.706 SO libspdk_bdev.so.17.0
00:03:07.706 LIB libspdk_blobfs.a
00:03:07.706 LIB libspdk_lvol.a
00:03:07.706 SO libspdk_blobfs.so.11.0
00:03:07.706 SO libspdk_lvol.so.11.0
00:03:07.706 SYMLINK libspdk_bdev.so
00:03:07.706 SYMLINK libspdk_blobfs.so
00:03:07.970 SYMLINK libspdk_lvol.so
00:03:08.239 CC lib/nbd/nbd.o
00:03:08.239 CC lib/nbd/nbd_rpc.o
00:03:08.239 CC lib/ftl/ftl_core.o
00:03:08.239 CC lib/ftl/ftl_init.o
00:03:08.239 CC lib/ftl/ftl_layout.o
00:03:08.239 CC lib/ftl/ftl_debug.o
00:03:08.239 CC lib/ftl/ftl_io.o
00:03:08.239 CC lib/ublk/ublk.o
00:03:08.239 CC lib/ublk/ublk_rpc.o
00:03:08.239 CC lib/ftl/ftl_sb.o
00:03:08.239 CC lib/ftl/ftl_l2p.o
00:03:08.239 CC lib/nvmf/ctrlr.o
00:03:08.239 CC lib/scsi/dev.o
00:03:08.239 CC lib/ftl/ftl_l2p_flat.o
00:03:08.239 CC lib/scsi/lun.o
00:03:08.239 CC lib/nvmf/ctrlr_discovery.o
00:03:08.239 CC lib/ftl/ftl_nv_cache.o
00:03:08.239 CC lib/scsi/port.o
00:03:08.239 CC lib/nvmf/ctrlr_bdev.o
00:03:08.239 CC lib/ftl/ftl_band.o
00:03:08.239 CC lib/scsi/scsi.o
00:03:08.239 CC lib/nvmf/subsystem.o
00:03:08.239 CC lib/ftl/ftl_band_ops.o
00:03:08.239 CC lib/nvmf/nvmf.o
00:03:08.239 CC lib/scsi/scsi_bdev.o
00:03:08.239 CC lib/nvmf/nvmf_rpc.o
00:03:08.239 CC lib/ftl/ftl_writer.o
00:03:08.239 CC lib/ftl/ftl_rq.o
00:03:08.239 CC lib/nvmf/transport.o
00:03:08.239 CC lib/scsi/scsi_pr.o
00:03:08.239 CC lib/scsi/scsi_rpc.o
00:03:08.239 CC lib/nvmf/tcp.o
00:03:08.239 CC lib/ftl/ftl_reloc.o
00:03:08.239 CC lib/ftl/ftl_p2l.o
00:03:08.239 CC lib/nvmf/stubs.o
00:03:08.239 CC lib/ftl/ftl_l2p_cache.o
00:03:08.239 CC lib/scsi/task.o
00:03:08.239 CC lib/ftl/ftl_p2l_log.o
00:03:08.239 CC lib/nvmf/mdns_server.o
00:03:08.239 CC lib/nvmf/vfio_user.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt.o
00:03:08.239 CC lib/nvmf/rdma.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:08.239 CC lib/nvmf/auth.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:08.239 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:08.239 CC lib/ftl/utils/ftl_conf.o
00:03:08.239 CC lib/ftl/utils/ftl_md.o
00:03:08.239 CC lib/ftl/utils/ftl_bitmap.o
00:03:08.239 CC lib/ftl/utils/ftl_mempool.o
00:03:08.239 CC lib/ftl/utils/ftl_property.o
00:03:08.239 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:08.239 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:08.239 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:08.239 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:08.239 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:08.239 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:08.239 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:08.239 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:08.239 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:08.239 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:08.239 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:08.239 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:08.239 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:08.239 CC lib/ftl/base/ftl_base_dev.o
00:03:08.239 CC lib/ftl/base/ftl_base_bdev.o
00:03:08.239 CC lib/ftl/ftl_trace.o
00:03:08.888 LIB libspdk_nbd.a
00:03:08.888 LIB libspdk_scsi.a
00:03:08.888 SO libspdk_nbd.so.7.0
00:03:08.888 SO libspdk_scsi.so.9.0
00:03:08.888 SYMLINK libspdk_nbd.so
00:03:08.888 SYMLINK libspdk_scsi.so
00:03:09.208 LIB libspdk_ublk.a
00:03:09.208 SO libspdk_ublk.so.3.0
00:03:09.208 SYMLINK libspdk_ublk.so
00:03:09.208 LIB libspdk_ftl.a
00:03:09.208 CC lib/iscsi/conn.o
00:03:09.208 CC lib/iscsi/init_grp.o
00:03:09.208 CC lib/iscsi/iscsi.o
00:03:09.208 CC lib/iscsi/param.o
00:03:09.208 CC lib/iscsi/portal_grp.o
00:03:09.208 CC lib/iscsi/tgt_node.o
00:03:09.208 CC lib/iscsi/iscsi_subsystem.o
00:03:09.208 CC lib/iscsi/iscsi_rpc.o
00:03:09.208 CC lib/iscsi/task.o
00:03:09.208 CC lib/vhost/vhost.o
00:03:09.208 CC lib/vhost/vhost_rpc.o
00:03:09.208 CC lib/vhost/vhost_scsi.o
00:03:09.208 CC lib/vhost/vhost_blk.o
00:03:09.208 CC lib/vhost/rte_vhost_user.o
00:03:09.208 SO libspdk_ftl.so.9.0
00:03:09.466 SYMLINK libspdk_ftl.so
00:03:10.033 LIB libspdk_nvmf.a
00:03:10.033 LIB libspdk_vhost.a
00:03:10.033 SO libspdk_nvmf.so.20.0
00:03:10.033 SO libspdk_vhost.so.8.0
00:03:10.292 SYMLINK libspdk_vhost.so
00:03:10.292 LIB libspdk_iscsi.a
00:03:10.292 SYMLINK libspdk_nvmf.so
00:03:10.292 SO libspdk_iscsi.so.8.0
00:03:10.551 SYMLINK libspdk_iscsi.so
00:03:11.117 CC module/env_dpdk/env_dpdk_rpc.o
00:03:11.117 CC module/vfu_device/vfu_virtio.o
00:03:11.117 CC module/vfu_device/vfu_virtio_blk.o
00:03:11.117 CC module/vfu_device/vfu_virtio_scsi.o
00:03:11.117 CC module/vfu_device/vfu_virtio_rpc.o
00:03:11.117 CC module/vfu_device/vfu_virtio_fs.o
00:03:11.117 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:11.117 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:11.117 LIB libspdk_env_dpdk_rpc.a
00:03:11.117 CC module/accel/ioat/accel_ioat.o
00:03:11.117 CC module/accel/ioat/accel_ioat_rpc.o
00:03:11.117 CC module/accel/dsa/accel_dsa.o
00:03:11.117 CC module/accel/dsa/accel_dsa_rpc.o
00:03:11.117 CC module/sock/posix/posix.o
00:03:11.117 CC module/scheduler/gscheduler/gscheduler.o
00:03:11.117 CC module/blob/bdev/blob_bdev.o
00:03:11.117 CC module/accel/iaa/accel_iaa.o
00:03:11.117 CC module/keyring/file/keyring.o
00:03:11.117 CC module/accel/iaa/accel_iaa_rpc.o
00:03:11.117 CC module/accel/error/accel_error.o
00:03:11.117 CC module/accel/error/accel_error_rpc.o
00:03:11.117 CC module/fsdev/aio/fsdev_aio.o
00:03:11.117 CC module/keyring/file/keyring_rpc.o
00:03:11.117 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:11.117 CC module/fsdev/aio/linux_aio_mgr.o
00:03:11.117 CC module/keyring/linux/keyring.o
00:03:11.117 CC module/keyring/linux/keyring_rpc.o
00:03:11.117 SO libspdk_env_dpdk_rpc.so.6.0
00:03:11.376 SYMLINK libspdk_env_dpdk_rpc.so
00:03:11.376 LIB libspdk_scheduler_dpdk_governor.a
00:03:11.376 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:11.376 LIB libspdk_scheduler_dynamic.a
00:03:11.376 LIB libspdk_scheduler_gscheduler.a
00:03:11.376 LIB libspdk_keyring_linux.a
00:03:11.376 LIB libspdk_accel_ioat.a
00:03:11.376 LIB libspdk_keyring_file.a
00:03:11.376 SO libspdk_keyring_linux.so.1.0
00:03:11.376 SO libspdk_accel_ioat.so.6.0
00:03:11.376 SO libspdk_scheduler_dynamic.so.4.0
00:03:11.376 SO libspdk_scheduler_gscheduler.so.4.0
00:03:11.376 SO libspdk_keyring_file.so.2.0
00:03:11.376 LIB libspdk_accel_error.a
00:03:11.376 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:11.376 LIB libspdk_accel_iaa.a
00:03:11.376 SYMLINK libspdk_keyring_linux.so
00:03:11.376 SYMLINK libspdk_scheduler_gscheduler.so
00:03:11.376 SO libspdk_accel_iaa.so.3.0
00:03:11.376 SYMLINK libspdk_scheduler_dynamic.so
00:03:11.376 SO libspdk_accel_error.so.2.0
00:03:11.376 SYMLINK libspdk_accel_ioat.so
00:03:11.376 SYMLINK libspdk_keyring_file.so
00:03:11.376 LIB libspdk_blob_bdev.a
00:03:11.376 LIB libspdk_accel_dsa.a
00:03:11.376 SYMLINK libspdk_accel_iaa.so
00:03:11.635 SO libspdk_blob_bdev.so.12.0
00:03:11.635 SYMLINK libspdk_accel_error.so
00:03:11.635 SO libspdk_accel_dsa.so.5.0
00:03:11.635 LIB libspdk_vfu_device.a
00:03:11.635 SYMLINK libspdk_blob_bdev.so
00:03:11.635 SYMLINK libspdk_accel_dsa.so
00:03:11.635 SO libspdk_vfu_device.so.3.0
00:03:11.635 SYMLINK libspdk_vfu_device.so
00:03:11.895 LIB libspdk_fsdev_aio.a
00:03:11.895 LIB libspdk_sock_posix.a
00:03:11.895 SO libspdk_fsdev_aio.so.1.0
00:03:11.895 SO libspdk_sock_posix.so.6.0
00:03:11.895 SYMLINK libspdk_fsdev_aio.so
00:03:11.895 SYMLINK libspdk_sock_posix.so
00:03:12.154 CC module/blobfs/bdev/blobfs_bdev.o
00:03:12.154 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:12.154 CC module/bdev/gpt/gpt.o
00:03:12.154 CC module/bdev/error/vbdev_error.o
00:03:12.154 CC module/bdev/gpt/vbdev_gpt.o
00:03:12.154 CC module/bdev/error/vbdev_error_rpc.o
00:03:12.154 CC module/bdev/malloc/bdev_malloc.o
00:03:12.154 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:12.154 CC module/bdev/lvol/vbdev_lvol.o
00:03:12.154 CC module/bdev/delay/vbdev_delay.o
00:03:12.154 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:12.154 CC module/bdev/iscsi/bdev_iscsi.o
00:03:12.154 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:12.154 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:12.154 CC module/bdev/passthru/vbdev_passthru.o
00:03:12.154 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:12.154 CC module/bdev/nvme/bdev_nvme.o
00:03:12.154 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:12.154 CC module/bdev/nvme/nvme_rpc.o
00:03:12.154 CC module/bdev/nvme/bdev_mdns_client.o
00:03:12.154 CC module/bdev/ftl/bdev_ftl.o
00:03:12.154 CC module/bdev/nvme/vbdev_opal.o
00:03:12.154 CC module/bdev/split/vbdev_split.o
00:03:12.154 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:12.154 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:12.154 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:12.154 CC module/bdev/split/vbdev_split_rpc.o
00:03:12.154 CC module/bdev/null/bdev_null.o
00:03:12.154 CC module/bdev/raid/bdev_raid.o
00:03:12.154 CC module/bdev/null/bdev_null_rpc.o
00:03:12.154 CC module/bdev/raid/bdev_raid_sb.o
00:03:12.154 CC module/bdev/raid/bdev_raid_rpc.o
00:03:12.154 CC module/bdev/raid/raid0.o
00:03:12.154 CC module/bdev/raid/raid1.o
00:03:12.154 CC module/bdev/raid/concat.o
00:03:12.154 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:12.154 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:12.154 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:12.154 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:12.154 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:12.154 CC module/bdev/aio/bdev_aio.o
00:03:12.154 CC module/bdev/aio/bdev_aio_rpc.o
00:03:12.413 LIB libspdk_blobfs_bdev.a
00:03:12.413 SO libspdk_blobfs_bdev.so.6.0
00:03:12.413 LIB libspdk_bdev_error.a
00:03:12.413 LIB libspdk_bdev_split.a
00:03:12.413 SO libspdk_bdev_error.so.6.0
00:03:12.413 SYMLINK libspdk_blobfs_bdev.so
00:03:12.413 LIB libspdk_bdev_gpt.a
00:03:12.413 LIB libspdk_bdev_ftl.a
00:03:12.413 LIB libspdk_bdev_passthru.a
00:03:12.413 SO libspdk_bdev_gpt.so.6.0
00:03:12.413 SO libspdk_bdev_split.so.6.0
00:03:12.413 LIB libspdk_bdev_null.a
00:03:12.413 SYMLINK libspdk_bdev_error.so
00:03:12.413 SO libspdk_bdev_ftl.so.6.0
00:03:12.413 SO libspdk_bdev_passthru.so.6.0
00:03:12.413 LIB libspdk_bdev_zone_block.a
00:03:12.413 LIB libspdk_bdev_iscsi.a
00:03:12.413 SO libspdk_bdev_null.so.6.0
00:03:12.413 LIB libspdk_bdev_aio.a
00:03:12.671 SYMLINK libspdk_bdev_gpt.so
00:03:12.671 SO libspdk_bdev_zone_block.so.6.0
00:03:12.671 SYMLINK libspdk_bdev_split.so
00:03:12.671 LIB libspdk_bdev_malloc.a
00:03:12.671 SYMLINK libspdk_bdev_ftl.so
00:03:12.671 SYMLINK libspdk_bdev_passthru.so
00:03:12.671 LIB libspdk_bdev_delay.a
00:03:12.671 SO libspdk_bdev_iscsi.so.6.0
00:03:12.671 SO libspdk_bdev_aio.so.6.0
00:03:12.671 SYMLINK libspdk_bdev_null.so
00:03:12.671 SO libspdk_bdev_malloc.so.6.0
00:03:12.671 SO libspdk_bdev_delay.so.6.0
00:03:12.671 SYMLINK libspdk_bdev_zone_block.so
00:03:12.671 SYMLINK libspdk_bdev_iscsi.so
00:03:12.671 SYMLINK libspdk_bdev_aio.so
00:03:12.671 SYMLINK libspdk_bdev_delay.so
00:03:12.671 SYMLINK libspdk_bdev_malloc.so
00:03:12.671 LIB libspdk_bdev_lvol.a
00:03:12.671 LIB libspdk_bdev_virtio.a
00:03:12.671 SO libspdk_bdev_lvol.so.6.0
00:03:12.671 SO libspdk_bdev_virtio.so.6.0
00:03:12.671 SYMLINK libspdk_bdev_lvol.so
00:03:12.930 SYMLINK libspdk_bdev_virtio.so
00:03:12.930 LIB libspdk_bdev_raid.a
00:03:12.930 SO libspdk_bdev_raid.so.6.0
00:03:13.189 SYMLINK libspdk_bdev_raid.so
00:03:14.126 LIB libspdk_bdev_nvme.a
00:03:14.126 SO libspdk_bdev_nvme.so.7.1
00:03:14.126 SYMLINK libspdk_bdev_nvme.so
00:03:15.063 CC module/event/subsystems/sock/sock.o
00:03:15.063 CC module/event/subsystems/iobuf/iobuf.o
00:03:15.064 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:15.064 CC module/event/subsystems/vmd/vmd.o
00:03:15.064 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:15.064 CC module/event/subsystems/keyring/keyring.o
00:03:15.064 CC module/event/subsystems/fsdev/fsdev.o
00:03:15.064 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:15.064 CC module/event/subsystems/scheduler/scheduler.o
00:03:15.064 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:15.064 LIB libspdk_event_keyring.a
00:03:15.064 LIB libspdk_event_vhost_blk.a
00:03:15.064 LIB libspdk_event_sock.a
00:03:15.064 LIB libspdk_event_vmd.a
00:03:15.064 LIB libspdk_event_fsdev.a
00:03:15.064 LIB libspdk_event_iobuf.a
00:03:15.064 LIB libspdk_event_scheduler.a
00:03:15.064 LIB libspdk_event_vfu_tgt.a
00:03:15.064 SO libspdk_event_keyring.so.1.0
00:03:15.064 SO libspdk_event_vhost_blk.so.3.0
00:03:15.064 SO libspdk_event_sock.so.5.0
00:03:15.064 SO libspdk_event_fsdev.so.1.0
00:03:15.064 SO libspdk_event_vmd.so.6.0
00:03:15.064 SO libspdk_event_iobuf.so.3.0
00:03:15.064 SO libspdk_event_scheduler.so.4.0
00:03:15.064 SO libspdk_event_vfu_tgt.so.3.0
00:03:15.064 SYMLINK libspdk_event_keyring.so
00:03:15.064 SYMLINK libspdk_event_vhost_blk.so
00:03:15.064 SYMLINK libspdk_event_vmd.so
00:03:15.064 SYMLINK libspdk_event_fsdev.so
00:03:15.064 SYMLINK libspdk_event_sock.so
00:03:15.064 SYMLINK libspdk_event_scheduler.so
00:03:15.064 SYMLINK libspdk_event_iobuf.so
00:03:15.064 SYMLINK libspdk_event_vfu_tgt.so
00:03:15.632 CC module/event/subsystems/accel/accel.o
00:03:15.632 LIB libspdk_event_accel.a
00:03:15.632 SO libspdk_event_accel.so.6.0
00:03:15.632 SYMLINK libspdk_event_accel.so
00:03:16.200 CC module/event/subsystems/bdev/bdev.o
00:03:16.200 LIB libspdk_event_bdev.a
00:03:16.200 SO libspdk_event_bdev.so.6.0
00:03:16.460 SYMLINK libspdk_event_bdev.so
00:03:16.719 CC module/event/subsystems/scsi/scsi.o
00:03:16.719 CC module/event/subsystems/nbd/nbd.o
00:03:16.719 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:16.719 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:16.719 CC module/event/subsystems/ublk/ublk.o
00:03:16.978 LIB libspdk_event_ublk.a
00:03:16.978 LIB libspdk_event_nbd.a
00:03:16.978 LIB libspdk_event_scsi.a
00:03:16.978 SO libspdk_event_ublk.so.3.0
00:03:16.978 SO libspdk_event_nbd.so.6.0
00:03:16.978 SO libspdk_event_scsi.so.6.0
00:03:16.978 SYMLINK libspdk_event_ublk.so
00:03:16.978 LIB libspdk_event_nvmf.a
00:03:16.978 SYMLINK libspdk_event_nbd.so
00:03:16.978 SYMLINK libspdk_event_scsi.so
00:03:16.978 SO libspdk_event_nvmf.so.6.0
00:03:16.978 SYMLINK libspdk_event_nvmf.so
00:03:17.237 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:17.237 CC module/event/subsystems/iscsi/iscsi.o
00:03:17.495 LIB libspdk_event_vhost_scsi.a
00:03:17.495 LIB libspdk_event_iscsi.a
00:03:17.495 SO libspdk_event_vhost_scsi.so.3.0
00:03:17.495 SO libspdk_event_iscsi.so.6.0
00:03:17.495 SYMLINK libspdk_event_vhost_scsi.so
00:03:17.495 SYMLINK libspdk_event_iscsi.so
00:03:17.754 SO libspdk.so.6.0
00:03:17.754 SYMLINK libspdk.so
00:03:18.328 CC test/rpc_client/rpc_client_test.o
00:03:18.328 TEST_HEADER include/spdk/accel.h
00:03:18.328 TEST_HEADER include/spdk/assert.h
00:03:18.328 TEST_HEADER include/spdk/accel_module.h
00:03:18.328 TEST_HEADER include/spdk/base64.h
00:03:18.328 TEST_HEADER include/spdk/barrier.h
00:03:18.328 TEST_HEADER include/spdk/bdev.h
00:03:18.328 TEST_HEADER include/spdk/bdev_module.h
00:03:18.328 TEST_HEADER include/spdk/bdev_zone.h
00:03:18.328 TEST_HEADER include/spdk/bit_array.h
00:03:18.328 TEST_HEADER include/spdk/bit_pool.h
00:03:18.328 CXX app/trace/trace.o
00:03:18.328 TEST_HEADER include/spdk/blob_bdev.h
00:03:18.328 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:18.328 TEST_HEADER include/spdk/blobfs.h
00:03:18.328 TEST_HEADER include/spdk/blob.h
00:03:18.328 TEST_HEADER include/spdk/conf.h
00:03:18.328 TEST_HEADER include/spdk/config.h
00:03:18.328 TEST_HEADER include/spdk/cpuset.h
00:03:18.328 CC app/spdk_nvme_perf/perf.o
00:03:18.328 TEST_HEADER include/spdk/crc16.h
00:03:18.328 CC app/trace_record/trace_record.o
00:03:18.328 TEST_HEADER include/spdk/crc32.h
00:03:18.328 TEST_HEADER include/spdk/crc64.h
00:03:18.328 TEST_HEADER include/spdk/dif.h
00:03:18.328 CC app/spdk_top/spdk_top.o
00:03:18.328 CC app/spdk_lspci/spdk_lspci.o
00:03:18.328 TEST_HEADER include/spdk/endian.h
00:03:18.328 TEST_HEADER include/spdk/dma.h
00:03:18.328 TEST_HEADER include/spdk/env.h
00:03:18.328 TEST_HEADER include/spdk/env_dpdk.h
00:03:18.328 CC app/spdk_nvme_identify/identify.o
00:03:18.328 CC app/spdk_nvme_discover/discovery_aer.o
00:03:18.328 TEST_HEADER include/spdk/event.h
00:03:18.328 TEST_HEADER include/spdk/fd_group.h
00:03:18.328 TEST_HEADER include/spdk/fd.h
00:03:18.328 TEST_HEADER include/spdk/fsdev.h
00:03:18.328 TEST_HEADER include/spdk/file.h
00:03:18.328 TEST_HEADER include/spdk/ftl.h
00:03:18.328 TEST_HEADER include/spdk/fsdev_module.h
00:03:18.328 TEST_HEADER include/spdk/gpt_spec.h
00:03:18.328 TEST_HEADER include/spdk/hexlify.h
00:03:18.328 TEST_HEADER include/spdk/histogram_data.h
00:03:18.328 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:18.328 TEST_HEADER include/spdk/idxd.h
00:03:18.328 TEST_HEADER include/spdk/init.h
00:03:18.328 TEST_HEADER include/spdk/idxd_spec.h
00:03:18.328 TEST_HEADER include/spdk/ioat_spec.h
00:03:18.328 TEST_HEADER include/spdk/iscsi_spec.h
00:03:18.328 TEST_HEADER include/spdk/ioat.h
00:03:18.328 TEST_HEADER include/spdk/json.h
00:03:18.328 TEST_HEADER include/spdk/keyring_module.h
00:03:18.328 TEST_HEADER include/spdk/likely.h
00:03:18.328 TEST_HEADER include/spdk/jsonrpc.h
00:03:18.328 TEST_HEADER include/spdk/keyring.h
00:03:18.328 TEST_HEADER include/spdk/log.h
00:03:18.328 TEST_HEADER include/spdk/lvol.h
00:03:18.328 TEST_HEADER include/spdk/memory.h
00:03:18.328 TEST_HEADER include/spdk/md5.h
00:03:18.328 TEST_HEADER include/spdk/mmio.h
00:03:18.328 TEST_HEADER include/spdk/nbd.h
00:03:18.328 TEST_HEADER include/spdk/net.h
00:03:18.328 TEST_HEADER include/spdk/notify.h
00:03:18.328 TEST_HEADER include/spdk/nvme_intel.h
00:03:18.328 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:18.328 TEST_HEADER include/spdk/nvme.h
00:03:18.328 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:18.328 TEST_HEADER include/spdk/nvme_zns.h
00:03:18.328 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:18.328 TEST_HEADER include/spdk/nvme_spec.h
00:03:18.328 TEST_HEADER include/spdk/nvmf_spec.h
00:03:18.328 TEST_HEADER include/spdk/nvmf.h
00:03:18.328 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:18.328 TEST_HEADER include/spdk/nvmf_transport.h
00:03:18.328 TEST_HEADER include/spdk/opal.h
00:03:18.328 TEST_HEADER include/spdk/opal_spec.h
00:03:18.328 CC app/spdk_dd/spdk_dd.o
00:03:18.328 TEST_HEADER include/spdk/pci_ids.h
00:03:18.328 CC app/nvmf_tgt/nvmf_main.o
00:03:18.328 TEST_HEADER include/spdk/pipe.h
00:03:18.328 TEST_HEADER include/spdk/queue.h
00:03:18.328 TEST_HEADER include/spdk/reduce.h
00:03:18.328 TEST_HEADER include/spdk/rpc.h
00:03:18.328 TEST_HEADER include/spdk/scsi.h
00:03:18.328 TEST_HEADER include/spdk/scheduler.h
00:03:18.328 TEST_HEADER include/spdk/scsi_spec.h
00:03:18.328 TEST_HEADER include/spdk/sock.h
00:03:18.328 TEST_HEADER include/spdk/thread.h
00:03:18.328 TEST_HEADER include/spdk/stdinc.h
00:03:18.328 TEST_HEADER include/spdk/string.h
00:03:18.328 TEST_HEADER include/spdk/trace_parser.h
00:03:18.328 TEST_HEADER include/spdk/tree.h
00:03:18.328 TEST_HEADER include/spdk/ublk.h
00:03:18.328 TEST_HEADER include/spdk/trace.h
00:03:18.328 TEST_HEADER include/spdk/util.h
00:03:18.328 TEST_HEADER include/spdk/uuid.h
00:03:18.328 TEST_HEADER include/spdk/version.h
00:03:18.328 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:18.328 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:18.328 TEST_HEADER include/spdk/vhost.h
00:03:18.328 TEST_HEADER include/spdk/vmd.h
00:03:18.328 TEST_HEADER include/spdk/zipf.h
00:03:18.328 TEST_HEADER include/spdk/xor.h
00:03:18.328 CXX test/cpp_headers/accel.o
00:03:18.328 CXX test/cpp_headers/assert.o
00:03:18.328 CXX test/cpp_headers/accel_module.o
00:03:18.328 CXX test/cpp_headers/barrier.o
00:03:18.328 CXX test/cpp_headers/base64.o
00:03:18.328 CXX test/cpp_headers/bdev_module.o
00:03:18.328 CXX test/cpp_headers/bdev.o
00:03:18.328 CXX test/cpp_headers/bdev_zone.o
00:03:18.328 CXX test/cpp_headers/bit_pool.o
00:03:18.328 CXX test/cpp_headers/bit_array.o
00:03:18.328 CXX test/cpp_headers/blob_bdev.o
00:03:18.328 CXX test/cpp_headers/blobfs_bdev.o
00:03:18.328 CXX test/cpp_headers/blobfs.o
00:03:18.328 CC app/spdk_tgt/spdk_tgt.o
00:03:18.328 CXX test/cpp_headers/blob.o
00:03:18.328 CXX test/cpp_headers/conf.o
00:03:18.328 CC app/iscsi_tgt/iscsi_tgt.o
00:03:18.328 CXX test/cpp_headers/config.o
00:03:18.328 CXX test/cpp_headers/crc16.o
00:03:18.328 CXX test/cpp_headers/cpuset.o
00:03:18.328 CXX test/cpp_headers/crc64.o
00:03:18.328 CXX test/cpp_headers/crc32.o
00:03:18.328 CXX test/cpp_headers/dif.o
00:03:18.328 CXX test/cpp_headers/endian.o
00:03:18.328 CXX test/cpp_headers/dma.o
00:03:18.328 CXX test/cpp_headers/env.o
00:03:18.328 CXX test/cpp_headers/env_dpdk.o
00:03:18.328 CXX test/cpp_headers/event.o
00:03:18.328 CXX test/cpp_headers/fd_group.o
00:03:18.328 CXX test/cpp_headers/fsdev.o
00:03:18.328 CXX test/cpp_headers/fd.o
00:03:18.328 CXX test/cpp_headers/fsdev_module.o
00:03:18.328 CXX test/cpp_headers/file.o
00:03:18.328 CXX test/cpp_headers/ftl.o
00:03:18.328 CXX test/cpp_headers/hexlify.o
00:03:18.328 CXX test/cpp_headers/gpt_spec.o
00:03:18.328 CXX test/cpp_headers/histogram_data.o
00:03:18.328 CXX test/cpp_headers/idxd.o
00:03:18.328 CXX test/cpp_headers/idxd_spec.o
00:03:18.328 CXX test/cpp_headers/init.o
00:03:18.328 CXX test/cpp_headers/ioat_spec.o
00:03:18.328 CXX test/cpp_headers/ioat.o
00:03:18.328 CXX test/cpp_headers/json.o
00:03:18.328 CXX test/cpp_headers/iscsi_spec.o
00:03:18.328 CXX test/cpp_headers/jsonrpc.o
00:03:18.328 CXX test/cpp_headers/keyring.o
00:03:18.328 CXX test/cpp_headers/keyring_module.o
00:03:18.328 CXX test/cpp_headers/likely.o
00:03:18.328 CXX test/cpp_headers/lvol.o
00:03:18.328 CXX test/cpp_headers/md5.o
00:03:18.328 CXX test/cpp_headers/log.o
00:03:18.328 CXX test/cpp_headers/mmio.o
00:03:18.328 CXX test/cpp_headers/memory.o
00:03:18.328 CXX test/cpp_headers/net.o
00:03:18.328 CXX test/cpp_headers/nbd.o
00:03:18.328 CXX test/cpp_headers/notify.o
00:03:18.328 CXX test/cpp_headers/nvme.o
00:03:18.328 CXX test/cpp_headers/nvme_intel.o
00:03:18.328 CXX test/cpp_headers/nvme_ocssd.o
00:03:18.328 CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:18.328 CXX test/cpp_headers/nvme_spec.o
00:03:18.328 CXX test/cpp_headers/nvmf_cmd.o
00:03:18.328 CXX test/cpp_headers/nvme_zns.o
00:03:18.328 CXX test/cpp_headers/nvmf_fc_spec.o
00:03:18.328 CXX test/cpp_headers/nvmf.o
00:03:18.328 CXX test/cpp_headers/nvmf_spec.o
00:03:18.328 CXX test/cpp_headers/nvmf_transport.o
00:03:18.328 CXX test/cpp_headers/opal.o
00:03:18.328 CXX test/cpp_headers/opal_spec.o
00:03:18.328 CXX test/cpp_headers/pci_ids.o
00:03:18.328 CC examples/util/zipf/zipf.o
00:03:18.328 CC examples/ioat/perf/perf.o
00:03:18.597 CC test/thread/poller_perf/poller_perf.o
00:03:18.597 CC test/app/histogram_perf/histogram_perf.o
00:03:18.597 CC test/env/memory/memory_ut.o
00:03:18.597 CC test/env/vtophys/vtophys.o
00:03:18.597 CC examples/ioat/verify/verify.o
00:03:18.597 CC test/env/pci/pci_ut.o
00:03:18.597 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:18.597 CC app/fio/nvme/fio_plugin.o
00:03:18.597 CC test/dma/test_dma/test_dma.o
00:03:18.597 CC test/app/jsoncat/jsoncat.o
00:03:18.597 CC test/app/stub/stub.o
00:03:18.597 CC test/app/bdev_svc/bdev_svc.o
00:03:18.597 LINK spdk_lspci
00:03:18.597 CC app/fio/bdev/fio_plugin.o
00:03:18.597 LINK rpc_client_test
00:03:18.858 LINK interrupt_tgt
00:03:18.858 LINK spdk_nvme_discover
00:03:18.858 LINK nvmf_tgt
00:03:18.858 CC test/env/mem_callbacks/mem_callbacks.o
00:03:18.858 CXX test/cpp_headers/pipe.o
00:03:18.858 CXX test/cpp_headers/queue.o
00:03:18.858 LINK iscsi_tgt
00:03:18.858 CXX test/cpp_headers/reduce.o
00:03:18.858 CXX test/cpp_headers/rpc.o
00:03:18.858 CXX test/cpp_headers/scheduler.o
00:03:18.858 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:18.858 CXX test/cpp_headers/scsi.o
00:03:18.858 CXX test/cpp_headers/scsi_spec.o
00:03:18.858 LINK zipf
00:03:18.858 LINK poller_perf
00:03:18.858 CXX test/cpp_headers/sock.o
00:03:18.858 CXX test/cpp_headers/stdinc.o
00:03:18.858 CXX test/cpp_headers/string.o
00:03:18.858 CXX test/cpp_headers/thread.o
00:03:18.858 CXX test/cpp_headers/trace.o
00:03:18.858 CXX test/cpp_headers/trace_parser.o
00:03:18.858 LINK spdk_tgt
00:03:18.858 CXX test/cpp_headers/tree.o
00:03:18.858 CXX test/cpp_headers/ublk.o
00:03:18.858 LINK jsoncat
00:03:18.858 CXX test/cpp_headers/util.o
00:03:18.858 CXX test/cpp_headers/uuid.o
00:03:19.118 CXX test/cpp_headers/version.o
00:03:19.118 CXX test/cpp_headers/vfio_user_pci.o
00:03:19.118 CXX test/cpp_headers/vfio_user_spec.o
00:03:19.118 CXX test/cpp_headers/vhost.o
00:03:19.118 CXX test/cpp_headers/vmd.o
00:03:19.118 CXX test/cpp_headers/xor.o
00:03:19.118 CXX test/cpp_headers/zipf.o
00:03:19.118 LINK spdk_trace_record
00:03:19.118 LINK histogram_perf
00:03:19.118 LINK vtophys
00:03:19.118 LINK bdev_svc
00:03:19.118 LINK verify
00:03:19.118 LINK env_dpdk_post_init
00:03:19.118 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:19.118 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:19.118 LINK stub
00:03:19.118 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:19.118 LINK ioat_perf
00:03:19.118 LINK spdk_dd
00:03:19.376 LINK spdk_trace
00:03:19.376 LINK pci_ut
00:03:19.376 LINK spdk_nvme
00:03:19.376 CC examples/idxd/perf/perf.o
00:03:19.376 CC examples/vmd/lsvmd/lsvmd.o
00:03:19.376 CC examples/sock/hello_world/hello_sock.o
00:03:19.376 LINK spdk_bdev
00:03:19.376 CC examples/vmd/led/led.o
00:03:19.376 LINK nvme_fuzz
00:03:19.634 CC test/event/reactor_perf/reactor_perf.o
00:03:19.634 LINK spdk_nvme_perf
00:03:19.634 CC test/event/event_perf/event_perf.o
00:03:19.634 CC test/event/reactor/reactor.o
00:03:19.634 CC examples/thread/thread/thread_ex.o
00:03:19.634 CC test/event/app_repeat/app_repeat.o
00:03:19.634 CC test/event/scheduler/scheduler.o
00:03:19.634 LINK spdk_nvme_identify
00:03:19.634 LINK test_dma
00:03:19.634 LINK vhost_fuzz
00:03:19.634 LINK lsvmd
00:03:19.634 LINK led
00:03:19.634 LINK reactor_perf
00:03:19.634 LINK reactor
00:03:19.634 LINK event_perf
00:03:19.634 LINK mem_callbacks
00:03:19.634 CC app/vhost/vhost.o
00:03:19.634 LINK app_repeat
00:03:19.634 LINK spdk_top
00:03:19.893 LINK hello_sock
00:03:19.893 LINK idxd_perf
00:03:19.893 LINK thread
00:03:19.893 LINK scheduler
00:03:19.893 LINK vhost
00:03:20.152 LINK memory_ut
00:03:20.152 CC test/nvme/reserve/reserve.o
00:03:20.152 CC test/nvme/sgl/sgl.o
00:03:20.152 CC test/nvme/boot_partition/boot_partition.o
00:03:20.152 CC test/nvme/reset/reset.o
00:03:20.152 CC test/nvme/fused_ordering/fused_ordering.o
00:03:20.152 CC test/nvme/overhead/overhead.o
00:03:20.152 CC test/nvme/e2edp/nvme_dp.o
00:03:20.152 CC test/nvme/err_injection/err_injection.o
00:03:20.152 CC test/nvme/aer/aer.o
00:03:20.152 CC test/nvme/simple_copy/simple_copy.o
00:03:20.152 CC test/nvme/connect_stress/connect_stress.o
00:03:20.152 CC test/nvme/fdp/fdp.o
00:03:20.152 CC test/nvme/compliance/nvme_compliance.o
00:03:20.152 CC test/nvme/cuse/cuse.o
00:03:20.152 CC test/nvme/startup/startup.o
00:03:20.152 CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:20.152 CC test/accel/dif/dif.o
00:03:20.152 CC test/blobfs/mkfs/mkfs.o
00:03:20.152 CC examples/nvme/arbitration/arbitration.o
00:03:20.152 CC examples/nvme/reconnect/reconnect.o
00:03:20.152 CC examples/nvme/abort/abort.o
00:03:20.152 CC examples/nvme/hello_world/hello_world.o
00:03:20.152 CC examples/nvme/hotplug/hotplug.o
00:03:20.152 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:20.152 CC examples/nvme/nvme_manage/nvme_manage.o
00:03:20.152 CC examples/nvme/cmb_copy/cmb_copy.o
00:03:20.410 CC test/lvol/esnap/esnap.o
00:03:20.410 CC examples/accel/perf/accel_perf.o
00:03:20.410 LINK boot_partition
00:03:20.410 CC examples/fsdev/hello_world/hello_fsdev.o
00:03:20.410 LINK fused_ordering
00:03:20.410 CC examples/blob/cli/blobcli.o
00:03:20.410 LINK reserve
00:03:20.410 LINK connect_stress
00:03:20.410 LINK err_injection
00:03:20.410 LINK startup
00:03:20.410 CC examples/blob/hello_world/hello_blob.o
00:03:20.410 LINK mkfs
00:03:20.410 LINK reset
00:03:20.410 LINK doorbell_aers
00:03:20.410 LINK simple_copy
00:03:20.410 LINK nvme_dp
00:03:20.410 LINK pmr_persistence
00:03:20.410 LINK cmb_copy
00:03:20.410 LINK sgl
00:03:20.410 LINK hello_world
00:03:20.410 LINK aer
00:03:20.410 LINK hotplug
00:03:20.410 LINK overhead
00:03:20.410 LINK nvme_compliance
00:03:20.410 LINK fdp
00:03:20.668 LINK arbitration
00:03:20.668 LINK reconnect
00:03:20.668 LINK abort
00:03:20.668 LINK iscsi_fuzz
00:03:20.668 LINK hello_blob
00:03:20.668 LINK hello_fsdev
00:03:20.668 LINK nvme_manage
00:03:20.668 LINK accel_perf
00:03:20.668 LINK dif
00:03:20.927 LINK
blobcli 00:03:21.186 LINK cuse 00:03:21.186 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.186 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.445 CC test/bdev/bdevio/bdevio.o 00:03:21.445 LINK hello_bdev 00:03:21.703 LINK bdevio 00:03:21.962 LINK bdevperf 00:03:22.530 CC examples/nvmf/nvmf/nvmf.o 00:03:22.789 LINK nvmf 00:03:23.725 LINK esnap 00:03:23.983 00:03:23.983 real 0m55.617s 00:03:23.983 user 6m47.698s 00:03:23.983 sys 2m55.562s 00:03:23.983 12:43:31 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:23.983 12:43:31 make -- common/autotest_common.sh@10 -- $ set +x 00:03:23.983 ************************************ 00:03:23.983 END TEST make 00:03:23.983 ************************************ 00:03:24.242 12:43:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.243 12:43:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.243 12:43:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.243 12:43:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.243 12:43:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.243 12:43:31 -- pm/common@44 -- $ pid=675185 00:03:24.243 12:43:31 -- pm/common@50 -- $ kill -TERM 675185 00:03:24.243 12:43:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.243 12:43:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.243 12:43:31 -- pm/common@44 -- $ pid=675186 00:03:24.243 12:43:31 -- pm/common@50 -- $ kill -TERM 675186 00:03:24.243 12:43:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.243 12:43:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:24.243 12:43:31 -- pm/common@44 -- $ pid=675188 00:03:24.243 12:43:31 -- pm/common@50 -- $ kill -TERM 675188 00:03:24.243 12:43:31 -- pm/common@42 -- 
$ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.243 12:43:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:24.243 12:43:31 -- pm/common@44 -- $ pid=675215 00:03:24.243 12:43:31 -- pm/common@50 -- $ sudo -E kill -TERM 675215 00:03:24.243 12:43:31 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:24.243 12:43:31 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:24.243 12:43:32 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:24.243 12:43:32 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:24.243 12:43:32 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:24.243 12:43:32 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:24.243 12:43:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:24.243 12:43:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:24.243 12:43:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:24.243 12:43:32 -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.243 12:43:32 -- scripts/common.sh@336 -- # read -ra ver1 00:03:24.243 12:43:32 -- scripts/common.sh@337 -- # IFS=.-: 00:03:24.243 12:43:32 -- scripts/common.sh@337 -- # read -ra ver2 00:03:24.243 12:43:32 -- scripts/common.sh@338 -- # local 'op=<' 00:03:24.243 12:43:32 -- scripts/common.sh@340 -- # ver1_l=2 00:03:24.243 12:43:32 -- scripts/common.sh@341 -- # ver2_l=1 00:03:24.243 12:43:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:24.243 12:43:32 -- scripts/common.sh@344 -- # case "$op" in 00:03:24.243 12:43:32 -- scripts/common.sh@345 -- # : 1 00:03:24.243 12:43:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:24.243 12:43:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:24.243 12:43:32 -- scripts/common.sh@365 -- # decimal 1 00:03:24.243 12:43:32 -- scripts/common.sh@353 -- # local d=1 00:03:24.243 12:43:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.243 12:43:32 -- scripts/common.sh@355 -- # echo 1 00:03:24.243 12:43:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:24.243 12:43:32 -- scripts/common.sh@366 -- # decimal 2 00:03:24.243 12:43:32 -- scripts/common.sh@353 -- # local d=2 00:03:24.243 12:43:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.243 12:43:32 -- scripts/common.sh@355 -- # echo 2 00:03:24.243 12:43:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:24.243 12:43:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:24.243 12:43:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:24.243 12:43:32 -- scripts/common.sh@368 -- # return 0 00:03:24.243 12:43:32 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.243 12:43:32 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:24.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.243 --rc genhtml_branch_coverage=1 00:03:24.243 --rc genhtml_function_coverage=1 00:03:24.243 --rc genhtml_legend=1 00:03:24.243 --rc geninfo_all_blocks=1 00:03:24.243 --rc geninfo_unexecuted_blocks=1 00:03:24.243 00:03:24.243 ' 00:03:24.243 12:43:32 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:24.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.243 --rc genhtml_branch_coverage=1 00:03:24.243 --rc genhtml_function_coverage=1 00:03:24.243 --rc genhtml_legend=1 00:03:24.243 --rc geninfo_all_blocks=1 00:03:24.243 --rc geninfo_unexecuted_blocks=1 00:03:24.243 00:03:24.243 ' 00:03:24.243 12:43:32 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:24.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.243 --rc genhtml_branch_coverage=1 00:03:24.243 --rc 
genhtml_function_coverage=1 00:03:24.243 --rc genhtml_legend=1 00:03:24.243 --rc geninfo_all_blocks=1 00:03:24.243 --rc geninfo_unexecuted_blocks=1 00:03:24.243 00:03:24.243 ' 00:03:24.243 12:43:32 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:24.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.243 --rc genhtml_branch_coverage=1 00:03:24.243 --rc genhtml_function_coverage=1 00:03:24.243 --rc genhtml_legend=1 00:03:24.243 --rc geninfo_all_blocks=1 00:03:24.243 --rc geninfo_unexecuted_blocks=1 00:03:24.243 00:03:24.243 ' 00:03:24.243 12:43:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:24.243 12:43:32 -- nvmf/common.sh@7 -- # uname -s 00:03:24.243 12:43:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:24.243 12:43:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:24.243 12:43:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:24.243 12:43:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:24.243 12:43:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:24.243 12:43:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:24.243 12:43:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:24.243 12:43:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:24.243 12:43:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:24.243 12:43:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:24.502 12:43:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:24.502 12:43:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:24.502 12:43:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:24.502 12:43:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:24.502 12:43:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:24.502 12:43:32 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:24.502 12:43:32 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:24.502 12:43:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:24.502 12:43:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:24.502 12:43:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:24.502 12:43:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:24.502 12:43:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.502 12:43:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.502 12:43:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.502 12:43:32 -- paths/export.sh@5 -- # export PATH 00:03:24.502 12:43:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.502 12:43:32 -- nvmf/common.sh@51 -- # : 0 00:03:24.502 12:43:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:24.502 12:43:32 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:24.502 12:43:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:24.502 12:43:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:24.502 12:43:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:24.502 12:43:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:24.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:24.502 12:43:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:24.502 12:43:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:24.502 12:43:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:24.502 12:43:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:24.502 12:43:32 -- spdk/autotest.sh@32 -- # uname -s 00:03:24.502 12:43:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:24.502 12:43:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:24.502 12:43:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:24.502 12:43:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:24.502 12:43:32 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:24.502 12:43:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:24.503 12:43:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:24.503 12:43:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:24.503 12:43:32 -- spdk/autotest.sh@48 -- # udevadm_pid=755841 00:03:24.503 12:43:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:24.503 12:43:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:24.503 12:43:32 -- pm/common@17 -- # local monitor 00:03:24.503 12:43:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.503 12:43:32 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:24.503 12:43:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.503 12:43:32 -- pm/common@21 -- # date +%s 00:03:24.503 12:43:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.503 12:43:32 -- pm/common@21 -- # date +%s 00:03:24.503 12:43:32 -- pm/common@25 -- # sleep 1 00:03:24.503 12:43:32 -- pm/common@21 -- # date +%s 00:03:24.503 12:43:32 -- pm/common@21 -- # date +%s 00:03:24.503 12:43:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734263012 00:03:24.503 12:43:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734263012 00:03:24.503 12:43:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734263012 00:03:24.503 12:43:32 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734263012 00:03:24.503 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734263012_collect-cpu-load.pm.log 00:03:24.503 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734263012_collect-vmstat.pm.log 00:03:24.503 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734263012_collect-cpu-temp.pm.log 00:03:24.503 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734263012_collect-bmc-pm.bmc.pm.log 00:03:25.440 
12:43:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.440 12:43:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:25.440 12:43:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:25.440 12:43:33 -- common/autotest_common.sh@10 -- # set +x 00:03:25.440 12:43:33 -- spdk/autotest.sh@59 -- # create_test_list 00:03:25.440 12:43:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:25.440 12:43:33 -- common/autotest_common.sh@10 -- # set +x 00:03:25.440 12:43:33 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:25.440 12:43:33 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.440 12:43:33 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.440 12:43:33 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:25.440 12:43:33 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.440 12:43:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:25.440 12:43:33 -- common/autotest_common.sh@1457 -- # uname 00:03:25.440 12:43:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:25.440 12:43:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:25.440 12:43:33 -- common/autotest_common.sh@1477 -- # uname 00:03:25.440 12:43:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:25.440 12:43:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:25.440 12:43:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:25.440 lcov: LCOV version 1.15 00:03:25.440 12:43:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:43.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:43.545 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:51.662 12:43:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:51.662 12:43:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.662 12:43:58 -- common/autotest_common.sh@10 -- # set +x 00:03:51.662 12:43:58 -- spdk/autotest.sh@78 -- # rm -f 00:03:51.662 12:43:58 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.040 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:53.040 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:53.040 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:53.040 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:53.040 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:53.040 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:53.299 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:53.299 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:53.558 12:44:01 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:53.558 12:44:01 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:53.558 12:44:01 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:53.558 12:44:01 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:53.558 12:44:01 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:53.558 12:44:01 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:53.558 12:44:01 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:53.558 12:44:01 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:53.558 12:44:01 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:53.558 12:44:01 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:53.558 12:44:01 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:53.558 12:44:01 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.558 12:44:01 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.558 12:44:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:53.558 12:44:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.558 12:44:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.558 12:44:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:53.558 12:44:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:53.558 12:44:01 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.558 No valid GPT data, bailing 00:03:53.558 12:44:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.558 12:44:01 -- scripts/common.sh@394 -- # pt= 00:03:53.558 12:44:01 -- scripts/common.sh@395 -- 
# return 1 00:03:53.558 12:44:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.558 1+0 records in 00:03:53.558 1+0 records out 00:03:53.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00182789 s, 574 MB/s 00:03:53.558 12:44:01 -- spdk/autotest.sh@105 -- # sync 00:03:53.558 12:44:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:53.558 12:44:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:53.558 12:44:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.830 12:44:06 -- spdk/autotest.sh@111 -- # uname -s 00:03:58.830 12:44:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:58.830 12:44:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:58.830 12:44:06 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:02.119 Hugepages 00:04:02.119 node hugesize free / total 00:04:02.119 node0 1048576kB 0 / 0 00:04:02.119 node0 2048kB 0 / 0 00:04:02.119 node1 1048576kB 0 / 0 00:04:02.119 node1 2048kB 0 / 0 00:04:02.119 00:04:02.119 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.119 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:02.119 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:02.119 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:02.119 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:02.119 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:02.119 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:02.119 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:02.119 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:02.119 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:02.119 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:02.120 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:02.120 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:02.120 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:02.120 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:02.120 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:04:02.120 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:02.120 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:02.120 12:44:09 -- spdk/autotest.sh@117 -- # uname -s 00:04:02.120 12:44:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:02.120 12:44:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:02.120 12:44:09 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.654 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.654 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.913 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:05.481 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.739 12:44:13 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:06.676 12:44:14 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:06.676 12:44:14 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:06.676 12:44:14 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.677 12:44:14 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.677 12:44:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.677 12:44:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.677 12:44:14 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.677 12:44:14 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.677 12:44:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.677 12:44:14 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:06.677 12:44:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:06.677 12:44:14 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.966 Waiting for block devices as requested 00:04:09.966 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:09.966 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:09.966 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:09.966 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:09.966 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:09.966 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:09.966 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:10.226 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:10.226 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:10.226 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:10.485 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:10.485 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:10.485 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:10.744 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:10.744 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:10.744 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:10.744 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:11.003 12:44:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:11.003 12:44:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:11.003 12:44:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:11.003 12:44:18 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:11.003 12:44:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:11.003 12:44:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:11.003 12:44:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:11.003 12:44:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:11.003 12:44:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:11.003 12:44:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:11.003 12:44:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:11.003 12:44:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:11.003 12:44:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:11.003 12:44:18 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:11.003 12:44:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:11.003 12:44:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:11.003 12:44:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:11.003 12:44:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:11.004 12:44:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:11.004 12:44:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:11.004 12:44:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:11.004 12:44:18 -- common/autotest_common.sh@1543 -- # continue 00:04:11.004 12:44:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:11.004 12:44:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.004 12:44:18 -- common/autotest_common.sh@10 -- # set +x 00:04:11.004 12:44:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:11.004 12:44:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.004 
12:44:18 -- common/autotest_common.sh@10 -- # set +x 00:04:11.004 12:44:18 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:14.293 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.293 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.861 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.861 12:44:22 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:14.861 12:44:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.861 12:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:14.861 12:44:22 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:14.861 12:44:22 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:14.861 12:44:22 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.861 12:44:22 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:14.861 12:44:22 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:14.861 12:44:22 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:14.861 12:44:22 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:14.861 12:44:22 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:04:14.861 12:44:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:14.861 12:44:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:14.861 12:44:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.861 12:44:22 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.861 12:44:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:15.120 12:44:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:15.120 12:44:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:15.120 12:44:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.120 12:44:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:15.120 12:44:22 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:15.120 12:44:22 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:15.120 12:44:22 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:15.120 12:44:22 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:15.120 12:44:22 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:15.120 12:44:22 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:15.120 12:44:22 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=770030 00:04:15.120 12:44:22 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.120 12:44:22 -- common/autotest_common.sh@1585 -- # waitforlisten 770030 00:04:15.120 12:44:22 -- common/autotest_common.sh@835 -- # '[' -z 770030 ']' 00:04:15.120 12:44:22 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.120 12:44:22 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.120 12:44:22 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.120 12:44:22 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.120 12:44:22 -- common/autotest_common.sh@10 -- # set +x 00:04:15.120 [2024-12-15 12:44:22.856937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:15.120 [2024-12-15 12:44:22.856983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770030 ] 00:04:15.120 [2024-12-15 12:44:22.934060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.120 [2024-12-15 12:44:22.956593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.378 12:44:23 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.378 12:44:23 -- common/autotest_common.sh@868 -- # return 0 00:04:15.378 12:44:23 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:15.378 12:44:23 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:15.378 12:44:23 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:18.665 nvme0n1 00:04:18.665 12:44:26 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:18.665 [2024-12-15 12:44:26.331717] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:18.665 [2024-12-15 12:44:26.331747] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:18.665 request: 00:04:18.665 { 00:04:18.665 "nvme_ctrlr_name": "nvme0", 00:04:18.665 "password": "test", 00:04:18.665 "method": 
"bdev_nvme_opal_revert", 00:04:18.665 "req_id": 1 00:04:18.665 } 00:04:18.665 Got JSON-RPC error response 00:04:18.665 response: 00:04:18.665 { 00:04:18.665 "code": -32603, 00:04:18.665 "message": "Internal error" 00:04:18.665 } 00:04:18.665 12:44:26 -- common/autotest_common.sh@1591 -- # true 00:04:18.665 12:44:26 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:18.665 12:44:26 -- common/autotest_common.sh@1595 -- # killprocess 770030 00:04:18.665 12:44:26 -- common/autotest_common.sh@954 -- # '[' -z 770030 ']' 00:04:18.665 12:44:26 -- common/autotest_common.sh@958 -- # kill -0 770030 00:04:18.665 12:44:26 -- common/autotest_common.sh@959 -- # uname 00:04:18.665 12:44:26 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.665 12:44:26 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770030 00:04:18.665 12:44:26 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.665 12:44:26 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.665 12:44:26 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770030' 00:04:18.665 killing process with pid 770030 00:04:18.665 12:44:26 -- common/autotest_common.sh@973 -- # kill 770030 00:04:18.665 12:44:26 -- common/autotest_common.sh@978 -- # wait 770030 00:04:20.569 12:44:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:20.569 12:44:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:20.569 12:44:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.569 12:44:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.569 12:44:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:20.569 12:44:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.569 12:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:20.569 12:44:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:20.569 12:44:28 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.569 12:44:28 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.569 12:44:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.569 12:44:28 -- common/autotest_common.sh@10 -- # set +x 00:04:20.569 ************************************ 00:04:20.569 START TEST env 00:04:20.569 ************************************ 00:04:20.569 12:44:28 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.569 * Looking for test storage... 00:04:20.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:20.569 12:44:28 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:20.569 12:44:28 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:20.569 12:44:28 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:20.569 12:44:28 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:20.569 12:44:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.569 12:44:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.569 12:44:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.569 12:44:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.569 12:44:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.569 12:44:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.569 12:44:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.569 12:44:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.570 12:44:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.570 12:44:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.570 12:44:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.570 12:44:28 env -- scripts/common.sh@344 -- # case "$op" in 00:04:20.570 12:44:28 env -- scripts/common.sh@345 -- # : 1 00:04:20.570 12:44:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.570 12:44:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.570 12:44:28 env -- scripts/common.sh@365 -- # decimal 1 00:04:20.570 12:44:28 env -- scripts/common.sh@353 -- # local d=1 00:04:20.570 12:44:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.570 12:44:28 env -- scripts/common.sh@355 -- # echo 1 00:04:20.570 12:44:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.570 12:44:28 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.570 12:44:28 env -- scripts/common.sh@353 -- # local d=2 00:04:20.570 12:44:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.570 12:44:28 env -- scripts/common.sh@355 -- # echo 2 00:04:20.570 12:44:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.570 12:44:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.570 12:44:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.570 12:44:28 env -- scripts/common.sh@368 -- # return 0 00:04:20.570 12:44:28 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.570 12:44:28 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:20.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.570 --rc genhtml_branch_coverage=1 00:04:20.570 --rc genhtml_function_coverage=1 00:04:20.570 --rc genhtml_legend=1 00:04:20.570 --rc geninfo_all_blocks=1 00:04:20.570 --rc geninfo_unexecuted_blocks=1 00:04:20.570 00:04:20.570 ' 00:04:20.570 12:44:28 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:20.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.570 --rc genhtml_branch_coverage=1 00:04:20.570 --rc genhtml_function_coverage=1 00:04:20.570 --rc genhtml_legend=1 00:04:20.570 --rc geninfo_all_blocks=1 00:04:20.570 --rc geninfo_unexecuted_blocks=1 00:04:20.570 00:04:20.570 ' 00:04:20.570 12:44:28 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:20.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:20.570 --rc genhtml_branch_coverage=1 00:04:20.570 --rc genhtml_function_coverage=1 00:04:20.570 --rc genhtml_legend=1 00:04:20.570 --rc geninfo_all_blocks=1 00:04:20.570 --rc geninfo_unexecuted_blocks=1 00:04:20.570 00:04:20.570 ' 00:04:20.570 12:44:28 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:20.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.570 --rc genhtml_branch_coverage=1 00:04:20.570 --rc genhtml_function_coverage=1 00:04:20.570 --rc genhtml_legend=1 00:04:20.570 --rc geninfo_all_blocks=1 00:04:20.570 --rc geninfo_unexecuted_blocks=1 00:04:20.570 00:04:20.570 ' 00:04:20.570 12:44:28 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.570 12:44:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.570 12:44:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.570 12:44:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.570 ************************************ 00:04:20.570 START TEST env_memory 00:04:20.570 ************************************ 00:04:20.570 12:44:28 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.570 00:04:20.570 00:04:20.570 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.570 http://cunit.sourceforge.net/ 00:04:20.570 00:04:20.570 00:04:20.570 Suite: memory 00:04:20.570 Test: alloc and free memory map ...[2024-12-15 12:44:28.316699] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.570 passed 00:04:20.570 Test: mem map translation ...[2024-12-15 12:44:28.335262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.570 [2024-12-15 
12:44:28.335277] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.570 [2024-12-15 12:44:28.335326] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.570 [2024-12-15 12:44:28.335333] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.570 passed 00:04:20.570 Test: mem map registration ...[2024-12-15 12:44:28.371362] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.570 [2024-12-15 12:44:28.371374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.570 passed 00:04:20.570 Test: mem map adjacent registrations ...passed 00:04:20.570 00:04:20.570 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.570 suites 1 1 n/a 0 0 00:04:20.570 tests 4 4 4 0 0 00:04:20.570 asserts 152 152 152 0 n/a 00:04:20.570 00:04:20.570 Elapsed time = 0.133 seconds 00:04:20.570 00:04:20.570 real 0m0.146s 00:04:20.570 user 0m0.138s 00:04:20.570 sys 0m0.008s 00:04:20.570 12:44:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.570 12:44:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.570 ************************************ 00:04:20.570 END TEST env_memory 00:04:20.570 ************************************ 00:04:20.570 12:44:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.570 12:44:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:20.570 12:44:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.570 12:44:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.830 ************************************ 00:04:20.830 START TEST env_vtophys 00:04:20.830 ************************************ 00:04:20.830 12:44:28 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.830 EAL: lib.eal log level changed from notice to debug 00:04:20.830 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.830 EAL: Detected lcore 1 as core 1 on socket 0 00:04:20.830 EAL: Detected lcore 2 as core 2 on socket 0 00:04:20.830 EAL: Detected lcore 3 as core 3 on socket 0 00:04:20.830 EAL: Detected lcore 4 as core 4 on socket 0 00:04:20.830 EAL: Detected lcore 5 as core 5 on socket 0 00:04:20.830 EAL: Detected lcore 6 as core 6 on socket 0 00:04:20.830 EAL: Detected lcore 7 as core 8 on socket 0 00:04:20.830 EAL: Detected lcore 8 as core 9 on socket 0 00:04:20.830 EAL: Detected lcore 9 as core 10 on socket 0 00:04:20.830 EAL: Detected lcore 10 as core 11 on socket 0 00:04:20.830 EAL: Detected lcore 11 as core 12 on socket 0 00:04:20.830 EAL: Detected lcore 12 as core 13 on socket 0 00:04:20.830 EAL: Detected lcore 13 as core 16 on socket 0 00:04:20.830 EAL: Detected lcore 14 as core 17 on socket 0 00:04:20.830 EAL: Detected lcore 15 as core 18 on socket 0 00:04:20.830 EAL: Detected lcore 16 as core 19 on socket 0 00:04:20.830 EAL: Detected lcore 17 as core 20 on socket 0 00:04:20.830 EAL: Detected lcore 18 as core 21 on socket 0 00:04:20.830 EAL: Detected lcore 19 as core 25 on socket 0 00:04:20.830 EAL: Detected lcore 20 as core 26 on socket 0 00:04:20.830 EAL: Detected lcore 21 as core 27 on socket 0 00:04:20.830 EAL: Detected lcore 22 as core 28 on socket 0 00:04:20.830 EAL: Detected lcore 23 as core 29 on socket 0 00:04:20.830 EAL: Detected lcore 24 as core 0 on socket 1 00:04:20.830 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:20.830 EAL: Detected lcore 26 as core 2 on socket 1 00:04:20.830 EAL: Detected lcore 27 as core 3 on socket 1 00:04:20.830 EAL: Detected lcore 28 as core 4 on socket 1 00:04:20.830 EAL: Detected lcore 29 as core 5 on socket 1 00:04:20.830 EAL: Detected lcore 30 as core 6 on socket 1 00:04:20.830 EAL: Detected lcore 31 as core 8 on socket 1 00:04:20.830 EAL: Detected lcore 32 as core 9 on socket 1 00:04:20.830 EAL: Detected lcore 33 as core 10 on socket 1 00:04:20.831 EAL: Detected lcore 34 as core 11 on socket 1 00:04:20.831 EAL: Detected lcore 35 as core 12 on socket 1 00:04:20.831 EAL: Detected lcore 36 as core 13 on socket 1 00:04:20.831 EAL: Detected lcore 37 as core 16 on socket 1 00:04:20.831 EAL: Detected lcore 38 as core 17 on socket 1 00:04:20.831 EAL: Detected lcore 39 as core 18 on socket 1 00:04:20.831 EAL: Detected lcore 40 as core 19 on socket 1 00:04:20.831 EAL: Detected lcore 41 as core 20 on socket 1 00:04:20.831 EAL: Detected lcore 42 as core 21 on socket 1 00:04:20.831 EAL: Detected lcore 43 as core 25 on socket 1 00:04:20.831 EAL: Detected lcore 44 as core 26 on socket 1 00:04:20.831 EAL: Detected lcore 45 as core 27 on socket 1 00:04:20.831 EAL: Detected lcore 46 as core 28 on socket 1 00:04:20.831 EAL: Detected lcore 47 as core 29 on socket 1 00:04:20.831 EAL: Detected lcore 48 as core 0 on socket 0 00:04:20.831 EAL: Detected lcore 49 as core 1 on socket 0 00:04:20.831 EAL: Detected lcore 50 as core 2 on socket 0 00:04:20.831 EAL: Detected lcore 51 as core 3 on socket 0 00:04:20.831 EAL: Detected lcore 52 as core 4 on socket 0 00:04:20.831 EAL: Detected lcore 53 as core 5 on socket 0 00:04:20.831 EAL: Detected lcore 54 as core 6 on socket 0 00:04:20.831 EAL: Detected lcore 55 as core 8 on socket 0 00:04:20.831 EAL: Detected lcore 56 as core 9 on socket 0 00:04:20.831 EAL: Detected lcore 57 as core 10 on socket 0 00:04:20.831 EAL: Detected lcore 58 as core 11 on socket 0 00:04:20.831 EAL: Detected lcore 59 as core 12 
on socket 0 00:04:20.831 EAL: Detected lcore 60 as core 13 on socket 0 00:04:20.831 EAL: Detected lcore 61 as core 16 on socket 0 00:04:20.831 EAL: Detected lcore 62 as core 17 on socket 0 00:04:20.831 EAL: Detected lcore 63 as core 18 on socket 0 00:04:20.831 EAL: Detected lcore 64 as core 19 on socket 0 00:04:20.831 EAL: Detected lcore 65 as core 20 on socket 0 00:04:20.831 EAL: Detected lcore 66 as core 21 on socket 0 00:04:20.831 EAL: Detected lcore 67 as core 25 on socket 0 00:04:20.831 EAL: Detected lcore 68 as core 26 on socket 0 00:04:20.831 EAL: Detected lcore 69 as core 27 on socket 0 00:04:20.831 EAL: Detected lcore 70 as core 28 on socket 0 00:04:20.831 EAL: Detected lcore 71 as core 29 on socket 0 00:04:20.831 EAL: Detected lcore 72 as core 0 on socket 1 00:04:20.831 EAL: Detected lcore 73 as core 1 on socket 1 00:04:20.831 EAL: Detected lcore 74 as core 2 on socket 1 00:04:20.831 EAL: Detected lcore 75 as core 3 on socket 1 00:04:20.831 EAL: Detected lcore 76 as core 4 on socket 1 00:04:20.831 EAL: Detected lcore 77 as core 5 on socket 1 00:04:20.831 EAL: Detected lcore 78 as core 6 on socket 1 00:04:20.831 EAL: Detected lcore 79 as core 8 on socket 1 00:04:20.831 EAL: Detected lcore 80 as core 9 on socket 1 00:04:20.831 EAL: Detected lcore 81 as core 10 on socket 1 00:04:20.831 EAL: Detected lcore 82 as core 11 on socket 1 00:04:20.831 EAL: Detected lcore 83 as core 12 on socket 1 00:04:20.831 EAL: Detected lcore 84 as core 13 on socket 1 00:04:20.831 EAL: Detected lcore 85 as core 16 on socket 1 00:04:20.831 EAL: Detected lcore 86 as core 17 on socket 1 00:04:20.831 EAL: Detected lcore 87 as core 18 on socket 1 00:04:20.831 EAL: Detected lcore 88 as core 19 on socket 1 00:04:20.831 EAL: Detected lcore 89 as core 20 on socket 1 00:04:20.831 EAL: Detected lcore 90 as core 21 on socket 1 00:04:20.831 EAL: Detected lcore 91 as core 25 on socket 1 00:04:20.831 EAL: Detected lcore 92 as core 26 on socket 1 00:04:20.831 EAL: Detected lcore 93 as core 27 on 
socket 1 00:04:20.831 EAL: Detected lcore 94 as core 28 on socket 1 00:04:20.831 EAL: Detected lcore 95 as core 29 on socket 1 00:04:20.831 EAL: Maximum logical cores by configuration: 128 00:04:20.831 EAL: Detected CPU lcores: 96 00:04:20.831 EAL: Detected NUMA nodes: 2 00:04:20.831 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:20.831 EAL: Detected shared linkage of DPDK 00:04:20.831 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:20.831 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:20.831 EAL: Registered [vdev] bus. 00:04:20.831 EAL: bus.vdev log level changed from disabled to notice 00:04:20.831 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:20.831 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:20.831 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:20.831 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:20.831 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:20.831 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:20.831 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:20.831 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:20.831 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.831 EAL: No shared files mode enabled, IPC is disabled 00:04:20.831 EAL: Bus pci wants IOVA as 'DC' 00:04:20.831 EAL: Bus vdev wants IOVA as 'DC' 00:04:20.831 EAL: Buses did not request a specific IOVA mode. 
00:04:20.831 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:20.831 EAL: Selected IOVA mode 'VA' 00:04:20.831 EAL: Probing VFIO support... 00:04:20.831 EAL: IOMMU type 1 (Type 1) is supported 00:04:20.831 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:20.831 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:20.831 EAL: VFIO support initialized 00:04:20.831 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.831 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.831 EAL: Setting up physically contiguous memory... 00:04:20.831 EAL: Setting maximum number of open files to 524288 00:04:20.831 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.831 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:20.831 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.831 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.831 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.831 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.831 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.831 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.831 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.831 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.831 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:04:20.831 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.831 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.831 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.831 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.831 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.831 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:20.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.831 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:20.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.831 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:20.831 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:20.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.831 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:20.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.831 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:20.831 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:20.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.831 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:20.831 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.831 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:20.831 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:20.831 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.831 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:20.831 EAL: Memseg list allocated at socket 1, page 
size 0x800kB 00:04:20.831 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.831 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:20.831 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:20.831 EAL: Hugepages will be freed exactly as allocated. 00:04:20.831 EAL: No shared files mode enabled, IPC is disabled 00:04:20.831 EAL: No shared files mode enabled, IPC is disabled 00:04:20.831 EAL: TSC frequency is ~2100000 KHz 00:04:20.831 EAL: Main lcore 0 is ready (tid=7f1dffe54a00;cpuset=[0]) 00:04:20.831 EAL: Trying to obtain current memory policy. 00:04:20.831 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.831 EAL: Restoring previous memory policy: 0 00:04:20.831 EAL: request: mp_malloc_sync 00:04:20.831 EAL: No shared files mode enabled, IPC is disabled 00:04:20.831 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.831 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:20.831 EAL: probe driver: 8086:37d2 net_i40e 00:04:20.831 EAL: Not managed by a supported kernel driver, skipped 00:04:20.832 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:20.832 EAL: probe driver: 8086:37d2 net_i40e 00:04:20.832 EAL: Not managed by a supported kernel driver, skipped 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:20.832 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.832 00:04:20.832 00:04:20.832 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.832 http://cunit.sourceforge.net/ 00:04:20.832 00:04:20.832 00:04:20.832 Suite: components_suite 00:04:20.832 Test: vtophys_malloc_test ...passed 00:04:20.832 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:20.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.832 EAL: Restoring previous memory policy: 4 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was expanded by 4MB 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was shrunk by 4MB 00:04:20.832 EAL: Trying to obtain current memory policy. 00:04:20.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.832 EAL: Restoring previous memory policy: 4 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was expanded by 6MB 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was shrunk by 6MB 00:04:20.832 EAL: Trying to obtain current memory policy. 00:04:20.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.832 EAL: Restoring previous memory policy: 4 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was expanded by 10MB 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was shrunk by 10MB 00:04:20.832 EAL: Trying to obtain current memory policy. 
00:04:20.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.832 EAL: Restoring previous memory policy: 4 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was expanded by 18MB 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was shrunk by 18MB 00:04:20.832 EAL: Trying to obtain current memory policy. 00:04:20.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.832 EAL: Restoring previous memory policy: 4 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was expanded by 34MB 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was shrunk by 34MB 00:04:20.832 EAL: Trying to obtain current memory policy. 00:04:20.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.832 EAL: Restoring previous memory policy: 4 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was expanded by 66MB 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was shrunk by 66MB 00:04:20.832 EAL: Trying to obtain current memory policy. 
00:04:20.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.832 EAL: Restoring previous memory policy: 4 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.832 EAL: Trying to obtain current memory policy. 00:04:20.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.832 EAL: Restoring previous memory policy: 4 00:04:20.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.832 EAL: request: mp_malloc_sync 00:04:20.832 EAL: No shared files mode enabled, IPC is disabled 00:04:20.832 EAL: Heap on socket 0 was expanded by 258MB 00:04:21.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.091 EAL: request: mp_malloc_sync 00:04:21.091 EAL: No shared files mode enabled, IPC is disabled 00:04:21.091 EAL: Heap on socket 0 was shrunk by 258MB 00:04:21.091 EAL: Trying to obtain current memory policy. 00:04:21.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.091 EAL: Restoring previous memory policy: 4 00:04:21.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.091 EAL: request: mp_malloc_sync 00:04:21.091 EAL: No shared files mode enabled, IPC is disabled 00:04:21.091 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.350 EAL: request: mp_malloc_sync 00:04:21.350 EAL: No shared files mode enabled, IPC is disabled 00:04:21.350 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.350 EAL: Trying to obtain current memory policy. 
00:04:21.350 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.608 EAL: Restoring previous memory policy: 4
00:04:21.608 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.608 EAL: request: mp_malloc_sync
00:04:21.608 EAL: No shared files mode enabled, IPC is disabled
00:04:21.608 EAL: Heap on socket 0 was expanded by 1026MB
00:04:21.608 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.867 EAL: request: mp_malloc_sync
00:04:21.867 EAL: No shared files mode enabled, IPC is disabled
00:04:21.867 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:21.867 passed
00:04:21.867
00:04:21.867 Run Summary: Type Total Ran Passed Failed Inactive
00:04:21.867 suites 1 1 n/a 0 0
00:04:21.867 tests 2 2 2 0 0
00:04:21.867 asserts 497 497 497 0 n/a
00:04:21.867
00:04:21.867 Elapsed time = 0.968 seconds
00:04:21.867 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.867 EAL: request: mp_malloc_sync
00:04:21.867 EAL: No shared files mode enabled, IPC is disabled
00:04:21.867 EAL: Heap on socket 0 was shrunk by 2MB
00:04:21.867 EAL: No shared files mode enabled, IPC is disabled
00:04:21.867 EAL: No shared files mode enabled, IPC is disabled
00:04:21.867 EAL: No shared files mode enabled, IPC is disabled
00:04:21.867
00:04:21.867 real 0m1.100s
00:04:21.867 user 0m0.648s
00:04:21.867 sys 0m0.422s
00:04:21.867 12:44:29 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:21.867 12:44:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:21.867 ************************************
00:04:21.867 END TEST env_vtophys ************************************
00:04:21.867 12:44:29 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:21.867 12:44:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:21.867 12:44:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:21.867 12:44:29 env -- common/autotest_common.sh@10 -- # set +x
00:04:21.867 ************************************
00:04:21.867 START TEST env_pci ************************************
00:04:21.867 12:44:29 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:21.867
00:04:21.867
00:04:21.867 CUnit - A unit testing framework for C - Version 2.1-3
00:04:21.867 http://cunit.sourceforge.net/
00:04:21.867
00:04:21.867
00:04:21.867 Suite: pci
00:04:21.867 Test: pci_hook ...[2024-12-15 12:44:29.674623] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 771294 has claimed it
00:04:21.867 EAL: Cannot find device (10000:00:01.0)
00:04:21.867 EAL: Failed to attach device on primary process
00:04:21.867 passed
00:04:21.867
00:04:21.867 Run Summary: Type Total Ran Passed Failed Inactive
00:04:21.867 suites 1 1 n/a 0 0
00:04:21.868 tests 1 1 1 0 0
00:04:21.868 asserts 25 25 25 0 n/a
00:04:21.868
00:04:21.868 Elapsed time = 0.027 seconds
00:04:21.868
00:04:21.868 real 0m0.046s
00:04:21.868 user 0m0.016s
00:04:21.868 sys 0m0.030s
00:04:21.868 12:44:29 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:21.868 12:44:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:21.868 ************************************
00:04:21.868 END TEST env_pci ************************************
00:04:21.868 12:44:29 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:21.868 12:44:29 env -- env/env.sh@15 -- # uname
00:04:21.868 12:44:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:21.868 12:44:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:21.868 12:44:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:21.868 12:44:29 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:21.868 12:44:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:21.868 12:44:29 env -- common/autotest_common.sh@10 -- # set +x
00:04:22.127 ************************************
00:04:22.127 START TEST env_dpdk_post_init ************************************
00:04:22.127 12:44:29 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:22.127 EAL: Detected CPU lcores: 96
00:04:22.127 EAL: Detected NUMA nodes: 2
00:04:22.127 EAL: Detected shared linkage of DPDK
00:04:22.127 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:22.127 EAL: Selected IOVA mode 'VA'
00:04:22.127 EAL: VFIO support initialized
00:04:22.127 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:22.127 EAL: Using IOMMU type 1 (Type 1)
00:04:22.127 EAL: Ignore mapping IO port bar(1)
00:04:22.127 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:22.127 EAL: Ignore mapping IO port bar(1)
00:04:22.127 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:22.127 EAL: Ignore mapping IO port bar(1)
00:04:22.127 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:22.127 EAL: Ignore mapping IO port bar(1)
00:04:22.127 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:22.127 EAL: Ignore mapping IO port bar(1)
00:04:22.127 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:22.127 EAL: Ignore mapping IO port bar(1)
00:04:22.127 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:22.127 EAL: Ignore mapping IO port bar(1)
00:04:22.127 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:22.127 EAL: Ignore mapping IO port bar(1)
00:04:22.127 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:23.064 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:23.064 EAL: Ignore mapping IO port bar(1)
00:04:23.064 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:23.064 EAL: Ignore mapping IO port bar(1)
00:04:23.064 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:23.064 EAL: Ignore mapping IO port bar(1)
00:04:23.064 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:23.064 EAL: Ignore mapping IO port bar(1)
00:04:23.064 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:23.064 EAL: Ignore mapping IO port bar(1)
00:04:23.064 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:23.064 EAL: Ignore mapping IO port bar(1)
00:04:23.064 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:23.064 EAL: Ignore mapping IO port bar(1)
00:04:23.064 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:23.064 EAL: Ignore mapping IO port bar(1)
00:04:23.064 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:26.352 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:26.352 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:26.352 Starting DPDK initialization...
00:04:26.352 Starting SPDK post initialization...
00:04:26.352 SPDK NVMe probe
00:04:26.352 Attaching to 0000:5e:00.0
00:04:26.352 Attached to 0000:5e:00.0
00:04:26.352 Cleaning up...
00:04:26.352
00:04:26.352 real 0m4.313s
00:04:26.352 user 0m3.249s
00:04:26.352 sys 0m0.139s
00:04:26.352 12:44:34 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:26.352 12:44:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:26.352 ************************************
00:04:26.352 END TEST env_dpdk_post_init ************************************
00:04:26.352 12:44:34 env -- env/env.sh@26 -- # uname
00:04:26.352 12:44:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:26.352 12:44:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:26.352 12:44:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:26.352 12:44:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:26.352 12:44:34 env -- common/autotest_common.sh@10 -- # set +x
00:04:26.352 ************************************
00:04:26.352 START TEST env_mem_callbacks ************************************
00:04:26.352 12:44:34 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:26.352 EAL: Detected CPU lcores: 96
00:04:26.352 EAL: Detected NUMA nodes: 2
00:04:26.352 EAL: Detected shared linkage of DPDK
00:04:26.352 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:26.352 EAL: Selected IOVA mode 'VA'
00:04:26.352 EAL: VFIO support initialized
00:04:26.352 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:26.352
00:04:26.352
00:04:26.352 CUnit - A unit testing framework for C - Version 2.1-3
00:04:26.352 http://cunit.sourceforge.net/
00:04:26.352
00:04:26.352
00:04:26.352 Suite: memory
00:04:26.352 Test: test ...
00:04:26.352 register 0x200000200000 2097152
00:04:26.352 malloc 3145728
00:04:26.352 register 0x200000400000 4194304
00:04:26.352 buf 0x200000500000 len 3145728 PASSED
00:04:26.352 malloc 64
00:04:26.352 buf 0x2000004fff40 len 64 PASSED
00:04:26.352 malloc 4194304
00:04:26.352 register 0x200000800000 6291456
00:04:26.352 buf 0x200000a00000 len 4194304 PASSED
00:04:26.352 free 0x200000500000 3145728
00:04:26.352 free 0x2000004fff40 64
00:04:26.352 unregister 0x200000400000 4194304 PASSED
00:04:26.352 free 0x200000a00000 4194304
00:04:26.352 unregister 0x200000800000 6291456 PASSED
00:04:26.352 malloc 8388608
00:04:26.352 register 0x200000400000 10485760
00:04:26.352 buf 0x200000600000 len 8388608 PASSED
00:04:26.352 free 0x200000600000 8388608
00:04:26.352 unregister 0x200000400000 10485760 PASSED
00:04:26.352 passed
00:04:26.352
00:04:26.352 Run Summary: Type Total Ran Passed Failed Inactive
00:04:26.352 suites 1 1 n/a 0 0
00:04:26.352 tests 1 1 1 0 0
00:04:26.352 asserts 15 15 15 0 n/a
00:04:26.352
00:04:26.352 Elapsed time = 0.008 seconds
00:04:26.352
00:04:26.352 real 0m0.061s
00:04:26.352 user 0m0.019s
00:04:26.352 sys 0m0.042s
00:04:26.352 12:44:34 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:26.352 12:44:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:26.352 ************************************
00:04:26.352 END TEST env_mem_callbacks ************************************
00:04:26.612
00:04:26.612 real 0m6.198s
00:04:26.612 user 0m4.306s
00:04:26.612 sys 0m0.969s
00:04:26.612 12:44:34 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:26.612 12:44:34 env -- common/autotest_common.sh@10 -- # set +x
00:04:26.612 ************************************
00:04:26.612 END TEST env ************************************
00:04:26.612 12:44:34 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:26.612 12:44:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:26.612 12:44:34 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:26.612 12:44:34 -- common/autotest_common.sh@10 -- # set +x
00:04:26.612 ************************************
00:04:26.612 START TEST rpc ************************************
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:26.612 * Looking for test storage...
00:04:26.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:26.612 12:44:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:26.612 12:44:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:26.612 12:44:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:26.612 12:44:34 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:26.612 12:44:34 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:26.612 12:44:34 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:26.612 12:44:34 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:26.612 12:44:34 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:26.612 12:44:34 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:26.612 12:44:34 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:26.612 12:44:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:26.612 12:44:34 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:26.612 12:44:34 rpc -- scripts/common.sh@345 -- # : 1
00:04:26.612 12:44:34 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:26.612 12:44:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:26.612 12:44:34 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:26.612 12:44:34 rpc -- scripts/common.sh@353 -- # local d=1
00:04:26.612 12:44:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:26.612 12:44:34 rpc -- scripts/common.sh@355 -- # echo 1
00:04:26.612 12:44:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:26.612 12:44:34 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:26.612 12:44:34 rpc -- scripts/common.sh@353 -- # local d=2
00:04:26.612 12:44:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:26.612 12:44:34 rpc -- scripts/common.sh@355 -- # echo 2
00:04:26.612 12:44:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:26.612 12:44:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:26.612 12:44:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:26.612 12:44:34 rpc -- scripts/common.sh@368 -- # return 0
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:26.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.612 --rc genhtml_branch_coverage=1
00:04:26.612 --rc genhtml_function_coverage=1
00:04:26.612 --rc genhtml_legend=1
00:04:26.612 --rc geninfo_all_blocks=1
00:04:26.612 --rc geninfo_unexecuted_blocks=1
00:04:26.612
00:04:26.612 '
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:26.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.612 --rc genhtml_branch_coverage=1
00:04:26.612 --rc genhtml_function_coverage=1
00:04:26.612 --rc genhtml_legend=1
00:04:26.612 --rc geninfo_all_blocks=1
00:04:26.612 --rc geninfo_unexecuted_blocks=1
00:04:26.612
00:04:26.612 '
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:26.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.612 --rc genhtml_branch_coverage=1
00:04:26.612 --rc genhtml_function_coverage=1
00:04:26.612 --rc genhtml_legend=1
00:04:26.612 --rc geninfo_all_blocks=1
00:04:26.612 --rc geninfo_unexecuted_blocks=1
00:04:26.612
00:04:26.612 '
00:04:26.612 12:44:34 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:26.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:26.612 --rc genhtml_branch_coverage=1
00:04:26.612 --rc genhtml_function_coverage=1
00:04:26.612 --rc genhtml_legend=1
00:04:26.612 --rc geninfo_all_blocks=1
00:04:26.612 --rc geninfo_unexecuted_blocks=1
00:04:26.612
00:04:26.612 '
00:04:26.871 12:44:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=772118
00:04:26.872 12:44:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:26.872 12:44:34 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:26.872 12:44:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 772118
00:04:26.872 12:44:34 rpc -- common/autotest_common.sh@835 -- # '[' -z 772118 ']'
00:04:26.872 12:44:34 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:26.872 12:44:34 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:26.872 12:44:34 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:26.872 12:44:34 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:26.872 12:44:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:26.872 [2024-12-15 12:44:34.572390] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:04:26.872 [2024-12-15 12:44:34.572433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772118 ]
00:04:26.872 [2024-12-15 12:44:34.645260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:26.872 [2024-12-15 12:44:34.667455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:26.872 [2024-12-15 12:44:34.667489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 772118' to capture a snapshot of events at runtime.
00:04:26.872 [2024-12-15 12:44:34.667496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:26.872 [2024-12-15 12:44:34.667502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:26.872 [2024-12-15 12:44:34.667506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid772118 for offline analysis/debug.
00:04:26.872 [2024-12-15 12:44:34.668023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:27.130 12:44:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:27.130 12:44:34 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:27.130 12:44:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:27.130 12:44:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:27.130 12:44:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:27.130 12:44:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:27.130 12:44:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.130 12:44:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.130 12:44:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:27.130 ************************************
00:04:27.130 START TEST rpc_integrity ************************************
00:04:27.130 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:27.130 12:44:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:27.130 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.130 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.130 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.130 12:44:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:27.130 12:44:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:27.130 12:44:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:27.130 12:44:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:27.130 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.130 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.130 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.130 12:44:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:27.131 12:44:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:27.131 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.131 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.131 12:44:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.131 12:44:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:27.131 {
00:04:27.131 "name": "Malloc0",
00:04:27.131 "aliases": [
00:04:27.131 "a4687b83-5df2-4edc-b885-bff4d965080c"
00:04:27.131 ],
00:04:27.131 "product_name": "Malloc disk",
00:04:27.131 "block_size": 512,
00:04:27.131 "num_blocks": 16384,
00:04:27.131 "uuid": "a4687b83-5df2-4edc-b885-bff4d965080c",
00:04:27.131 "assigned_rate_limits": {
00:04:27.131 "rw_ios_per_sec": 0,
00:04:27.131 "rw_mbytes_per_sec": 0,
00:04:27.131 "r_mbytes_per_sec": 0,
00:04:27.131 "w_mbytes_per_sec": 0
00:04:27.131 },
00:04:27.131 "claimed": false,
00:04:27.131 "zoned": false,
00:04:27.131 "supported_io_types": {
00:04:27.131 "read": true,
00:04:27.131 "write": true,
00:04:27.131 "unmap": true,
00:04:27.131 "flush": true,
00:04:27.131 "reset": true,
00:04:27.131 "nvme_admin": false,
00:04:27.131 "nvme_io": false,
00:04:27.131 "nvme_io_md": false,
00:04:27.131 "write_zeroes": true,
00:04:27.131 "zcopy": true,
00:04:27.131 "get_zone_info": false,
00:04:27.131 "zone_management": false,
00:04:27.131 "zone_append": false,
00:04:27.131 "compare": false,
00:04:27.131 "compare_and_write": false,
00:04:27.131 "abort": true,
00:04:27.131 "seek_hole": false,
00:04:27.131 "seek_data": false,
00:04:27.131 "copy": true,
00:04:27.131 "nvme_iov_md": false
00:04:27.131 },
00:04:27.131 "memory_domains": [
00:04:27.131 {
00:04:27.131 "dma_device_id": "system",
00:04:27.131 "dma_device_type": 1
00:04:27.131 },
00:04:27.131 {
00:04:27.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:27.131 "dma_device_type": 2
00:04:27.131 }
00:04:27.131 ],
00:04:27.131 "driver_specific": {}
00:04:27.131 }
00:04:27.131 ]'
00:04:27.131 12:44:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:27.131 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:27.131 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:27.131 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.131 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.131 [2024-12-15 12:44:35.035321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:27.131 [2024-12-15 12:44:35.035349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:27.131 [2024-12-15 12:44:35.035362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b8cae0
00:04:27.131 [2024-12-15 12:44:35.035368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:27.131 [2024-12-15 12:44:35.036422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:27.131 [2024-12-15 12:44:35.036442] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:27.399 Passthru0
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:27.399 {
00:04:27.399 "name": "Malloc0",
00:04:27.399 "aliases": [
00:04:27.399 "a4687b83-5df2-4edc-b885-bff4d965080c"
00:04:27.399 ],
00:04:27.399 "product_name": "Malloc disk",
00:04:27.399 "block_size": 512,
00:04:27.399 "num_blocks": 16384,
00:04:27.399 "uuid": "a4687b83-5df2-4edc-b885-bff4d965080c",
00:04:27.399 "assigned_rate_limits": {
00:04:27.399 "rw_ios_per_sec": 0,
00:04:27.399 "rw_mbytes_per_sec": 0,
00:04:27.399 "r_mbytes_per_sec": 0,
00:04:27.399 "w_mbytes_per_sec": 0
00:04:27.399 },
00:04:27.399 "claimed": true,
00:04:27.399 "claim_type": "exclusive_write",
00:04:27.399 "zoned": false,
00:04:27.399 "supported_io_types": {
00:04:27.399 "read": true,
00:04:27.399 "write": true,
00:04:27.399 "unmap": true,
00:04:27.399 "flush": true,
00:04:27.399 "reset": true,
00:04:27.399 "nvme_admin": false,
00:04:27.399 "nvme_io": false,
00:04:27.399 "nvme_io_md": false,
00:04:27.399 "write_zeroes": true,
00:04:27.399 "zcopy": true,
00:04:27.399 "get_zone_info": false,
00:04:27.399 "zone_management": false,
00:04:27.399 "zone_append": false,
00:04:27.399 "compare": false,
00:04:27.399 "compare_and_write": false,
00:04:27.399 "abort": true,
00:04:27.399 "seek_hole": false,
00:04:27.399 "seek_data": false,
00:04:27.399 "copy": true,
00:04:27.399 "nvme_iov_md": false
00:04:27.399 },
00:04:27.399 "memory_domains": [
00:04:27.399 {
00:04:27.399 "dma_device_id": "system",
00:04:27.399 "dma_device_type": 1
00:04:27.399 },
00:04:27.399 {
00:04:27.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:27.399 "dma_device_type": 2
00:04:27.399 }
00:04:27.399 ],
00:04:27.399 "driver_specific": {}
00:04:27.399 },
00:04:27.399 {
00:04:27.399 "name": "Passthru0",
00:04:27.399 "aliases": [
00:04:27.399 "8fd2768d-ab43-52b2-9acd-0a793258f398"
00:04:27.399 ],
00:04:27.399 "product_name": "passthru",
00:04:27.399 "block_size": 512,
00:04:27.399 "num_blocks": 16384,
00:04:27.399 "uuid": "8fd2768d-ab43-52b2-9acd-0a793258f398",
00:04:27.399 "assigned_rate_limits": {
00:04:27.399 "rw_ios_per_sec": 0,
00:04:27.399 "rw_mbytes_per_sec": 0,
00:04:27.399 "r_mbytes_per_sec": 0,
00:04:27.399 "w_mbytes_per_sec": 0
00:04:27.399 },
00:04:27.399 "claimed": false,
00:04:27.399 "zoned": false,
00:04:27.399 "supported_io_types": {
00:04:27.399 "read": true,
00:04:27.399 "write": true,
00:04:27.399 "unmap": true,
00:04:27.399 "flush": true,
00:04:27.399 "reset": true,
00:04:27.399 "nvme_admin": false,
00:04:27.399 "nvme_io": false,
00:04:27.399 "nvme_io_md": false,
00:04:27.399 "write_zeroes": true,
00:04:27.399 "zcopy": true,
00:04:27.399 "get_zone_info": false,
00:04:27.399 "zone_management": false,
00:04:27.399 "zone_append": false,
00:04:27.399 "compare": false,
00:04:27.399 "compare_and_write": false,
00:04:27.399 "abort": true,
00:04:27.399 "seek_hole": false,
00:04:27.399 "seek_data": false,
00:04:27.399 "copy": true,
00:04:27.399 "nvme_iov_md": false
00:04:27.399 },
00:04:27.399 "memory_domains": [
00:04:27.399 {
00:04:27.399 "dma_device_id": "system",
00:04:27.399 "dma_device_type": 1
00:04:27.399 },
00:04:27.399 {
00:04:27.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:27.399 "dma_device_type": 2
00:04:27.399 }
00:04:27.399 ],
00:04:27.399 "driver_specific": {
00:04:27.399 "passthru": {
00:04:27.399 "name": "Passthru0",
00:04:27.399 "base_bdev_name": "Malloc0"
00:04:27.399 }
00:04:27.399 }
00:04:27.399 }
00:04:27.399 ]'
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:27.399 12:44:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:27.399
00:04:27.399 real 0m0.277s
00:04:27.399 user 0m0.169s
00:04:27.399 sys 0m0.043s
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.399 12:44:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.399 ************************************
00:04:27.399 END TEST rpc_integrity ************************************
00:04:27.399 12:44:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:27.399 12:44:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.399 12:44:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.399 12:44:35 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:27.399 ************************************
00:04:27.399 START TEST rpc_plugins
00:04:27.399 ************************************ 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:27.399 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:27.399 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.399 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:27.399 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.399 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:27.399 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:27.399 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.399 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:27.399 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.399 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:27.399 {
00:04:27.399 "name": "Malloc1",
00:04:27.399 "aliases": [
00:04:27.399 "6f1c8e42-508d-4d10-947d-641aaf305d0e"
00:04:27.399 ],
00:04:27.399 "product_name": "Malloc disk",
00:04:27.399 "block_size": 4096,
00:04:27.399 "num_blocks": 256,
00:04:27.399 "uuid": "6f1c8e42-508d-4d10-947d-641aaf305d0e",
00:04:27.399 "assigned_rate_limits": {
00:04:27.399 "rw_ios_per_sec": 0,
00:04:27.399 "rw_mbytes_per_sec": 0,
00:04:27.399 "r_mbytes_per_sec": 0,
00:04:27.399 "w_mbytes_per_sec": 0
00:04:27.399 },
00:04:27.399 "claimed": false,
00:04:27.399 "zoned": false,
00:04:27.399 "supported_io_types": {
00:04:27.399 "read": true,
00:04:27.399 "write": true,
00:04:27.399 "unmap": true,
00:04:27.399 "flush": true,
00:04:27.399 "reset": true,
00:04:27.399 "nvme_admin": false,
00:04:27.399 "nvme_io": false,
00:04:27.399 "nvme_io_md": false,
00:04:27.399 "write_zeroes": true,
00:04:27.399 "zcopy": true,
00:04:27.399 "get_zone_info": false,
00:04:27.399 "zone_management": false,
00:04:27.399 "zone_append": false,
00:04:27.399 "compare": false,
00:04:27.399 "compare_and_write": false,
00:04:27.399 "abort": true,
00:04:27.399 "seek_hole": false,
00:04:27.399 "seek_data": false,
00:04:27.399 "copy": true,
00:04:27.399 "nvme_iov_md": false
00:04:27.399 },
00:04:27.399 "memory_domains": [
00:04:27.399 {
00:04:27.399 "dma_device_id": "system",
00:04:27.399 "dma_device_type": 1
00:04:27.399 },
00:04:27.399 {
00:04:27.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:27.399 "dma_device_type": 2
00:04:27.399 }
00:04:27.399 ],
00:04:27.399 "driver_specific": {}
00:04:27.399 }
00:04:27.399 ]'
00:04:27.399 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:27.658 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:27.658 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:27.658 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.658 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:27.658 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.658 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:27.658 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:27.658 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:27.658 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:27.658 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:27.658 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:27.658 12:44:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:27.658
00:04:27.658 real 0m0.144s
00:04:27.658 user 0m0.090s
00:04:27.658 sys 0m0.017s
00:04:27.658 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.658 12:44:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:27.658 ************************************
00:04:27.658 END TEST rpc_plugins 00:04:27.658 ************************************ 00:04:27.658 12:44:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.658 12:44:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.658 12:44:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.658 12:44:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.658 ************************************ 00:04:27.658 START TEST rpc_trace_cmd_test 00:04:27.658 ************************************ 00:04:27.658 12:44:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:27.658 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.658 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.658 12:44:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.658 12:44:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.658 12:44:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.658 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.658 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid772118", 00:04:27.658 "tpoint_group_mask": "0x8", 00:04:27.658 "iscsi_conn": { 00:04:27.658 "mask": "0x2", 00:04:27.658 "tpoint_mask": "0x0" 00:04:27.658 }, 00:04:27.658 "scsi": { 00:04:27.658 "mask": "0x4", 00:04:27.658 "tpoint_mask": "0x0" 00:04:27.658 }, 00:04:27.658 "bdev": { 00:04:27.658 "mask": "0x8", 00:04:27.658 "tpoint_mask": "0xffffffffffffffff" 00:04:27.658 }, 00:04:27.658 "nvmf_rdma": { 00:04:27.658 "mask": "0x10", 00:04:27.658 "tpoint_mask": "0x0" 00:04:27.658 }, 00:04:27.658 "nvmf_tcp": { 00:04:27.658 "mask": "0x20", 00:04:27.658 "tpoint_mask": "0x0" 00:04:27.658 }, 00:04:27.658 "ftl": { 00:04:27.658 "mask": "0x40", 00:04:27.658 "tpoint_mask": "0x0" 00:04:27.658 }, 00:04:27.658 "blobfs": { 00:04:27.658 "mask": "0x80", 00:04:27.658 
"tpoint_mask": "0x0" 00:04:27.658 }, 00:04:27.658 "dsa": { 00:04:27.658 "mask": "0x200", 00:04:27.658 "tpoint_mask": "0x0" 00:04:27.658 }, 00:04:27.658 "thread": { 00:04:27.658 "mask": "0x400", 00:04:27.659 "tpoint_mask": "0x0" 00:04:27.659 }, 00:04:27.659 "nvme_pcie": { 00:04:27.659 "mask": "0x800", 00:04:27.659 "tpoint_mask": "0x0" 00:04:27.659 }, 00:04:27.659 "iaa": { 00:04:27.659 "mask": "0x1000", 00:04:27.659 "tpoint_mask": "0x0" 00:04:27.659 }, 00:04:27.659 "nvme_tcp": { 00:04:27.659 "mask": "0x2000", 00:04:27.659 "tpoint_mask": "0x0" 00:04:27.659 }, 00:04:27.659 "bdev_nvme": { 00:04:27.659 "mask": "0x4000", 00:04:27.659 "tpoint_mask": "0x0" 00:04:27.659 }, 00:04:27.659 "sock": { 00:04:27.659 "mask": "0x8000", 00:04:27.659 "tpoint_mask": "0x0" 00:04:27.659 }, 00:04:27.659 "blob": { 00:04:27.659 "mask": "0x10000", 00:04:27.659 "tpoint_mask": "0x0" 00:04:27.659 }, 00:04:27.659 "bdev_raid": { 00:04:27.659 "mask": "0x20000", 00:04:27.659 "tpoint_mask": "0x0" 00:04:27.659 }, 00:04:27.659 "scheduler": { 00:04:27.659 "mask": "0x40000", 00:04:27.659 "tpoint_mask": "0x0" 00:04:27.659 } 00:04:27.659 }' 00:04:27.659 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.659 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:27.659 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.659 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.659 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.917 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.917 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.917 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.917 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:27.917 12:44:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:27.917 00:04:27.917 real 0m0.214s 00:04:27.917 user 0m0.187s 00:04:27.917 sys 0m0.020s 00:04:27.917 12:44:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.917 12:44:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.917 ************************************ 00:04:27.917 END TEST rpc_trace_cmd_test 00:04:27.917 ************************************ 00:04:27.917 12:44:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:27.917 12:44:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:27.917 12:44:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:27.917 12:44:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.917 12:44:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.917 12:44:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.917 ************************************ 00:04:27.917 START TEST rpc_daemon_integrity 00:04:27.917 ************************************ 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.917 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.176 { 00:04:28.176 "name": "Malloc2", 00:04:28.176 "aliases": [ 00:04:28.176 "a42879b1-e07e-4bb2-8c26-40bd754a41cf" 00:04:28.176 ], 00:04:28.176 "product_name": "Malloc disk", 00:04:28.176 "block_size": 512, 00:04:28.176 "num_blocks": 16384, 00:04:28.176 "uuid": "a42879b1-e07e-4bb2-8c26-40bd754a41cf", 00:04:28.176 "assigned_rate_limits": { 00:04:28.176 "rw_ios_per_sec": 0, 00:04:28.176 "rw_mbytes_per_sec": 0, 00:04:28.176 "r_mbytes_per_sec": 0, 00:04:28.176 "w_mbytes_per_sec": 0 00:04:28.176 }, 00:04:28.176 "claimed": false, 00:04:28.176 "zoned": false, 00:04:28.176 "supported_io_types": { 00:04:28.176 "read": true, 00:04:28.176 "write": true, 00:04:28.176 "unmap": true, 00:04:28.176 "flush": true, 00:04:28.176 "reset": true, 00:04:28.176 "nvme_admin": false, 00:04:28.176 "nvme_io": false, 00:04:28.176 "nvme_io_md": false, 00:04:28.176 "write_zeroes": true, 00:04:28.176 "zcopy": true, 00:04:28.176 "get_zone_info": false, 00:04:28.176 "zone_management": false, 00:04:28.176 "zone_append": false, 00:04:28.176 "compare": false, 00:04:28.176 "compare_and_write": false, 00:04:28.176 "abort": true, 00:04:28.176 "seek_hole": false, 00:04:28.176 "seek_data": false, 00:04:28.176 "copy": true, 00:04:28.176 "nvme_iov_md": false 00:04:28.176 }, 00:04:28.176 "memory_domains": [ 00:04:28.176 { 
00:04:28.176 "dma_device_id": "system", 00:04:28.176 "dma_device_type": 1 00:04:28.176 }, 00:04:28.176 { 00:04:28.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.176 "dma_device_type": 2 00:04:28.176 } 00:04:28.176 ], 00:04:28.176 "driver_specific": {} 00:04:28.176 } 00:04:28.176 ]' 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.176 [2024-12-15 12:44:35.885607] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.176 [2024-12-15 12:44:35.885632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.176 [2024-12-15 12:44:35.885646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a4af80 00:04:28.176 [2024-12-15 12:44:35.885652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.176 [2024-12-15 12:44:35.886622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.176 [2024-12-15 12:44:35.886642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.176 Passthru0 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:28.176 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.176 { 00:04:28.176 "name": "Malloc2", 00:04:28.176 "aliases": [ 00:04:28.176 "a42879b1-e07e-4bb2-8c26-40bd754a41cf" 00:04:28.176 ], 00:04:28.176 "product_name": "Malloc disk", 00:04:28.176 "block_size": 512, 00:04:28.176 "num_blocks": 16384, 00:04:28.176 "uuid": "a42879b1-e07e-4bb2-8c26-40bd754a41cf", 00:04:28.176 "assigned_rate_limits": { 00:04:28.176 "rw_ios_per_sec": 0, 00:04:28.176 "rw_mbytes_per_sec": 0, 00:04:28.176 "r_mbytes_per_sec": 0, 00:04:28.176 "w_mbytes_per_sec": 0 00:04:28.176 }, 00:04:28.176 "claimed": true, 00:04:28.176 "claim_type": "exclusive_write", 00:04:28.176 "zoned": false, 00:04:28.176 "supported_io_types": { 00:04:28.176 "read": true, 00:04:28.176 "write": true, 00:04:28.176 "unmap": true, 00:04:28.176 "flush": true, 00:04:28.176 "reset": true, 00:04:28.176 "nvme_admin": false, 00:04:28.176 "nvme_io": false, 00:04:28.176 "nvme_io_md": false, 00:04:28.176 "write_zeroes": true, 00:04:28.176 "zcopy": true, 00:04:28.176 "get_zone_info": false, 00:04:28.176 "zone_management": false, 00:04:28.176 "zone_append": false, 00:04:28.176 "compare": false, 00:04:28.177 "compare_and_write": false, 00:04:28.177 "abort": true, 00:04:28.177 "seek_hole": false, 00:04:28.177 "seek_data": false, 00:04:28.177 "copy": true, 00:04:28.177 "nvme_iov_md": false 00:04:28.177 }, 00:04:28.177 "memory_domains": [ 00:04:28.177 { 00:04:28.177 "dma_device_id": "system", 00:04:28.177 "dma_device_type": 1 00:04:28.177 }, 00:04:28.177 { 00:04:28.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.177 "dma_device_type": 2 00:04:28.177 } 00:04:28.177 ], 00:04:28.177 "driver_specific": {} 00:04:28.177 }, 00:04:28.177 { 00:04:28.177 "name": "Passthru0", 00:04:28.177 "aliases": [ 00:04:28.177 "3e4000b9-fd82-5eba-84d0-2e3314648150" 00:04:28.177 ], 00:04:28.177 "product_name": "passthru", 00:04:28.177 "block_size": 512, 00:04:28.177 "num_blocks": 16384, 00:04:28.177 "uuid": 
"3e4000b9-fd82-5eba-84d0-2e3314648150", 00:04:28.177 "assigned_rate_limits": { 00:04:28.177 "rw_ios_per_sec": 0, 00:04:28.177 "rw_mbytes_per_sec": 0, 00:04:28.177 "r_mbytes_per_sec": 0, 00:04:28.177 "w_mbytes_per_sec": 0 00:04:28.177 }, 00:04:28.177 "claimed": false, 00:04:28.177 "zoned": false, 00:04:28.177 "supported_io_types": { 00:04:28.177 "read": true, 00:04:28.177 "write": true, 00:04:28.177 "unmap": true, 00:04:28.177 "flush": true, 00:04:28.177 "reset": true, 00:04:28.177 "nvme_admin": false, 00:04:28.177 "nvme_io": false, 00:04:28.177 "nvme_io_md": false, 00:04:28.177 "write_zeroes": true, 00:04:28.177 "zcopy": true, 00:04:28.177 "get_zone_info": false, 00:04:28.177 "zone_management": false, 00:04:28.177 "zone_append": false, 00:04:28.177 "compare": false, 00:04:28.177 "compare_and_write": false, 00:04:28.177 "abort": true, 00:04:28.177 "seek_hole": false, 00:04:28.177 "seek_data": false, 00:04:28.177 "copy": true, 00:04:28.177 "nvme_iov_md": false 00:04:28.177 }, 00:04:28.177 "memory_domains": [ 00:04:28.177 { 00:04:28.177 "dma_device_id": "system", 00:04:28.177 "dma_device_type": 1 00:04:28.177 }, 00:04:28.177 { 00:04:28.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.177 "dma_device_type": 2 00:04:28.177 } 00:04:28.177 ], 00:04:28.177 "driver_specific": { 00:04:28.177 "passthru": { 00:04:28.177 "name": "Passthru0", 00:04:28.177 "base_bdev_name": "Malloc2" 00:04:28.177 } 00:04:28.177 } 00:04:28.177 } 00:04:28.177 ]' 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.177 12:44:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.177 12:44:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.177 00:04:28.177 real 0m0.282s 00:04:28.177 user 0m0.175s 00:04:28.177 sys 0m0.042s 00:04:28.177 12:44:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.177 12:44:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.177 ************************************ 00:04:28.177 END TEST rpc_daemon_integrity 00:04:28.177 ************************************ 00:04:28.177 12:44:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.177 12:44:36 rpc -- rpc/rpc.sh@84 -- # killprocess 772118 00:04:28.177 12:44:36 rpc -- common/autotest_common.sh@954 -- # '[' -z 772118 ']' 00:04:28.177 12:44:36 rpc -- common/autotest_common.sh@958 -- # kill -0 772118 00:04:28.177 12:44:36 rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.177 12:44:36 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.177 12:44:36 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772118 00:04:28.436 12:44:36 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.436 12:44:36 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.436 12:44:36 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772118' 00:04:28.436 killing process with pid 772118 00:04:28.436 12:44:36 rpc -- common/autotest_common.sh@973 -- # kill 772118 00:04:28.436 12:44:36 rpc -- common/autotest_common.sh@978 -- # wait 772118 00:04:28.695 00:04:28.695 real 0m2.063s 00:04:28.695 user 0m2.639s 00:04:28.695 sys 0m0.690s 00:04:28.695 12:44:36 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.695 12:44:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.695 ************************************ 00:04:28.695 END TEST rpc 00:04:28.695 ************************************ 00:04:28.695 12:44:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.695 12:44:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.695 12:44:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.695 12:44:36 -- common/autotest_common.sh@10 -- # set +x 00:04:28.695 ************************************ 00:04:28.695 START TEST skip_rpc 00:04:28.695 ************************************ 00:04:28.695 12:44:36 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.695 * Looking for test storage... 
00:04:28.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:28.695 12:44:36 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.695 12:44:36 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.695 12:44:36 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.955 12:44:36 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.955 12:44:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:28.955 12:44:36 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.955 12:44:36 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.955 --rc genhtml_branch_coverage=1 00:04:28.955 --rc genhtml_function_coverage=1 00:04:28.955 --rc genhtml_legend=1 00:04:28.955 --rc geninfo_all_blocks=1 00:04:28.955 --rc geninfo_unexecuted_blocks=1 00:04:28.955 00:04:28.955 ' 00:04:28.955 12:44:36 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.955 --rc genhtml_branch_coverage=1 00:04:28.955 --rc genhtml_function_coverage=1 00:04:28.955 --rc genhtml_legend=1 00:04:28.955 --rc geninfo_all_blocks=1 00:04:28.955 --rc geninfo_unexecuted_blocks=1 00:04:28.955 00:04:28.955 ' 00:04:28.955 12:44:36 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:28.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.955 --rc genhtml_branch_coverage=1 00:04:28.955 --rc genhtml_function_coverage=1 00:04:28.955 --rc genhtml_legend=1 00:04:28.955 --rc geninfo_all_blocks=1 00:04:28.955 --rc geninfo_unexecuted_blocks=1 00:04:28.955 00:04:28.955 ' 00:04:28.955 12:44:36 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.955 --rc genhtml_branch_coverage=1 00:04:28.955 --rc genhtml_function_coverage=1 00:04:28.955 --rc genhtml_legend=1 00:04:28.955 --rc geninfo_all_blocks=1 00:04:28.955 --rc geninfo_unexecuted_blocks=1 00:04:28.955 00:04:28.955 ' 00:04:28.955 12:44:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.955 12:44:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:28.955 12:44:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:28.955 12:44:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.955 12:44:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.955 12:44:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.955 ************************************ 00:04:28.955 START TEST skip_rpc 00:04:28.955 ************************************ 00:04:28.955 12:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:28.955 12:44:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=772739 00:04:28.955 12:44:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.955 12:44:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:28.955 12:44:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:28.955 [2024-12-15 12:44:36.744440] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:28.955 [2024-12-15 12:44:36.744479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772739 ] 00:04:28.955 [2024-12-15 12:44:36.819450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.955 [2024-12-15 12:44:36.841347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:34.234 12:44:41 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 772739 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 772739 ']' 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 772739 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772739 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772739' 00:04:34.234 killing process with pid 772739 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 772739 00:04:34.234 12:44:41 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 772739 00:04:34.234 00:04:34.234 real 0m5.362s 00:04:34.234 user 0m5.120s 00:04:34.234 sys 0m0.276s 00:04:34.234 12:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.234 12:44:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.234 ************************************ 00:04:34.234 END TEST skip_rpc 00:04:34.234 ************************************ 00:04:34.234 12:44:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:34.234 12:44:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.234 12:44:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.234 12:44:42 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:34.234 ************************************ 00:04:34.234 START TEST skip_rpc_with_json 00:04:34.234 ************************************ 00:04:34.234 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:34.234 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:34.234 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=773661 00:04:34.235 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.235 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.235 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 773661 00:04:34.235 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 773661 ']' 00:04:34.235 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.235 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.235 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.235 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.235 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.494 [2024-12-15 12:44:42.181021] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:04:34.494 [2024-12-15 12:44:42.181065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773661 ] 00:04:34.494 [2024-12-15 12:44:42.255034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.494 [2024-12-15 12:44:42.275133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.753 [2024-12-15 12:44:42.484288] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.753 request: 00:04:34.753 { 00:04:34.753 "trtype": "tcp", 00:04:34.753 "method": "nvmf_get_transports", 00:04:34.753 "req_id": 1 00:04:34.753 } 00:04:34.753 Got JSON-RPC error response 00:04:34.753 response: 00:04:34.753 { 00:04:34.753 "code": -19, 00:04:34.753 "message": "No such device" 00:04:34.753 } 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.753 [2024-12-15 12:44:42.496396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.753 12:44:42 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.753 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.012 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.012 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.012 { 00:04:35.012 "subsystems": [ 00:04:35.012 { 00:04:35.012 "subsystem": "fsdev", 00:04:35.012 "config": [ 00:04:35.012 { 00:04:35.012 "method": "fsdev_set_opts", 00:04:35.012 "params": { 00:04:35.012 "fsdev_io_pool_size": 65535, 00:04:35.012 "fsdev_io_cache_size": 256 00:04:35.012 } 00:04:35.012 } 00:04:35.012 ] 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "subsystem": "vfio_user_target", 00:04:35.012 "config": null 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "subsystem": "keyring", 00:04:35.012 "config": [] 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "subsystem": "iobuf", 00:04:35.012 "config": [ 00:04:35.012 { 00:04:35.012 "method": "iobuf_set_options", 00:04:35.012 "params": { 00:04:35.012 "small_pool_count": 8192, 00:04:35.012 "large_pool_count": 1024, 00:04:35.012 "small_bufsize": 8192, 00:04:35.012 "large_bufsize": 135168, 00:04:35.012 "enable_numa": false 00:04:35.012 } 00:04:35.012 } 00:04:35.012 ] 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "subsystem": "sock", 00:04:35.012 "config": [ 00:04:35.012 { 00:04:35.012 "method": "sock_set_default_impl", 00:04:35.012 "params": { 00:04:35.012 "impl_name": "posix" 00:04:35.012 } 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "method": "sock_impl_set_options", 00:04:35.012 "params": { 00:04:35.012 "impl_name": "ssl", 00:04:35.012 "recv_buf_size": 4096, 00:04:35.012 "send_buf_size": 4096, 
00:04:35.012 "enable_recv_pipe": true, 00:04:35.012 "enable_quickack": false, 00:04:35.012 "enable_placement_id": 0, 00:04:35.012 "enable_zerocopy_send_server": true, 00:04:35.012 "enable_zerocopy_send_client": false, 00:04:35.012 "zerocopy_threshold": 0, 00:04:35.012 "tls_version": 0, 00:04:35.012 "enable_ktls": false 00:04:35.012 } 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "method": "sock_impl_set_options", 00:04:35.012 "params": { 00:04:35.012 "impl_name": "posix", 00:04:35.012 "recv_buf_size": 2097152, 00:04:35.012 "send_buf_size": 2097152, 00:04:35.012 "enable_recv_pipe": true, 00:04:35.012 "enable_quickack": false, 00:04:35.012 "enable_placement_id": 0, 00:04:35.012 "enable_zerocopy_send_server": true, 00:04:35.012 "enable_zerocopy_send_client": false, 00:04:35.012 "zerocopy_threshold": 0, 00:04:35.012 "tls_version": 0, 00:04:35.012 "enable_ktls": false 00:04:35.012 } 00:04:35.012 } 00:04:35.012 ] 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "subsystem": "vmd", 00:04:35.012 "config": [] 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "subsystem": "accel", 00:04:35.012 "config": [ 00:04:35.012 { 00:04:35.012 "method": "accel_set_options", 00:04:35.012 "params": { 00:04:35.012 "small_cache_size": 128, 00:04:35.012 "large_cache_size": 16, 00:04:35.012 "task_count": 2048, 00:04:35.012 "sequence_count": 2048, 00:04:35.012 "buf_count": 2048 00:04:35.012 } 00:04:35.012 } 00:04:35.012 ] 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "subsystem": "bdev", 00:04:35.012 "config": [ 00:04:35.012 { 00:04:35.012 "method": "bdev_set_options", 00:04:35.012 "params": { 00:04:35.012 "bdev_io_pool_size": 65535, 00:04:35.012 "bdev_io_cache_size": 256, 00:04:35.012 "bdev_auto_examine": true, 00:04:35.012 "iobuf_small_cache_size": 128, 00:04:35.012 "iobuf_large_cache_size": 16 00:04:35.012 } 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "method": "bdev_raid_set_options", 00:04:35.012 "params": { 00:04:35.012 "process_window_size_kb": 1024, 00:04:35.012 "process_max_bandwidth_mb_sec": 0 
00:04:35.012 } 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "method": "bdev_iscsi_set_options", 00:04:35.012 "params": { 00:04:35.012 "timeout_sec": 30 00:04:35.012 } 00:04:35.012 }, 00:04:35.012 { 00:04:35.012 "method": "bdev_nvme_set_options", 00:04:35.012 "params": { 00:04:35.012 "action_on_timeout": "none", 00:04:35.012 "timeout_us": 0, 00:04:35.012 "timeout_admin_us": 0, 00:04:35.012 "keep_alive_timeout_ms": 10000, 00:04:35.012 "arbitration_burst": 0, 00:04:35.012 "low_priority_weight": 0, 00:04:35.012 "medium_priority_weight": 0, 00:04:35.012 "high_priority_weight": 0, 00:04:35.012 "nvme_adminq_poll_period_us": 10000, 00:04:35.013 "nvme_ioq_poll_period_us": 0, 00:04:35.013 "io_queue_requests": 0, 00:04:35.013 "delay_cmd_submit": true, 00:04:35.013 "transport_retry_count": 4, 00:04:35.013 "bdev_retry_count": 3, 00:04:35.013 "transport_ack_timeout": 0, 00:04:35.013 "ctrlr_loss_timeout_sec": 0, 00:04:35.013 "reconnect_delay_sec": 0, 00:04:35.013 "fast_io_fail_timeout_sec": 0, 00:04:35.013 "disable_auto_failback": false, 00:04:35.013 "generate_uuids": false, 00:04:35.013 "transport_tos": 0, 00:04:35.013 "nvme_error_stat": false, 00:04:35.013 "rdma_srq_size": 0, 00:04:35.013 "io_path_stat": false, 00:04:35.013 "allow_accel_sequence": false, 00:04:35.013 "rdma_max_cq_size": 0, 00:04:35.013 "rdma_cm_event_timeout_ms": 0, 00:04:35.013 "dhchap_digests": [ 00:04:35.013 "sha256", 00:04:35.013 "sha384", 00:04:35.013 "sha512" 00:04:35.013 ], 00:04:35.013 "dhchap_dhgroups": [ 00:04:35.013 "null", 00:04:35.013 "ffdhe2048", 00:04:35.013 "ffdhe3072", 00:04:35.013 "ffdhe4096", 00:04:35.013 "ffdhe6144", 00:04:35.013 "ffdhe8192" 00:04:35.013 ], 00:04:35.013 "rdma_umr_per_io": false 00:04:35.013 } 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "method": "bdev_nvme_set_hotplug", 00:04:35.013 "params": { 00:04:35.013 "period_us": 100000, 00:04:35.013 "enable": false 00:04:35.013 } 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "method": "bdev_wait_for_examine" 00:04:35.013 } 00:04:35.013 
] 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "subsystem": "scsi", 00:04:35.013 "config": null 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "subsystem": "scheduler", 00:04:35.013 "config": [ 00:04:35.013 { 00:04:35.013 "method": "framework_set_scheduler", 00:04:35.013 "params": { 00:04:35.013 "name": "static" 00:04:35.013 } 00:04:35.013 } 00:04:35.013 ] 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "subsystem": "vhost_scsi", 00:04:35.013 "config": [] 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "subsystem": "vhost_blk", 00:04:35.013 "config": [] 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "subsystem": "ublk", 00:04:35.013 "config": [] 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "subsystem": "nbd", 00:04:35.013 "config": [] 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "subsystem": "nvmf", 00:04:35.013 "config": [ 00:04:35.013 { 00:04:35.013 "method": "nvmf_set_config", 00:04:35.013 "params": { 00:04:35.013 "discovery_filter": "match_any", 00:04:35.013 "admin_cmd_passthru": { 00:04:35.013 "identify_ctrlr": false 00:04:35.013 }, 00:04:35.013 "dhchap_digests": [ 00:04:35.013 "sha256", 00:04:35.013 "sha384", 00:04:35.013 "sha512" 00:04:35.013 ], 00:04:35.013 "dhchap_dhgroups": [ 00:04:35.013 "null", 00:04:35.013 "ffdhe2048", 00:04:35.013 "ffdhe3072", 00:04:35.013 "ffdhe4096", 00:04:35.013 "ffdhe6144", 00:04:35.013 "ffdhe8192" 00:04:35.013 ] 00:04:35.013 } 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "method": "nvmf_set_max_subsystems", 00:04:35.013 "params": { 00:04:35.013 "max_subsystems": 1024 00:04:35.013 } 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "method": "nvmf_set_crdt", 00:04:35.013 "params": { 00:04:35.013 "crdt1": 0, 00:04:35.013 "crdt2": 0, 00:04:35.013 "crdt3": 0 00:04:35.013 } 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "method": "nvmf_create_transport", 00:04:35.013 "params": { 00:04:35.013 "trtype": "TCP", 00:04:35.013 "max_queue_depth": 128, 00:04:35.013 "max_io_qpairs_per_ctrlr": 127, 00:04:35.013 "in_capsule_data_size": 4096, 00:04:35.013 "max_io_size": 
131072, 00:04:35.013 "io_unit_size": 131072, 00:04:35.013 "max_aq_depth": 128, 00:04:35.013 "num_shared_buffers": 511, 00:04:35.013 "buf_cache_size": 4294967295, 00:04:35.013 "dif_insert_or_strip": false, 00:04:35.013 "zcopy": false, 00:04:35.013 "c2h_success": true, 00:04:35.013 "sock_priority": 0, 00:04:35.013 "abort_timeout_sec": 1, 00:04:35.013 "ack_timeout": 0, 00:04:35.013 "data_wr_pool_size": 0 00:04:35.013 } 00:04:35.013 } 00:04:35.013 ] 00:04:35.013 }, 00:04:35.013 { 00:04:35.013 "subsystem": "iscsi", 00:04:35.013 "config": [ 00:04:35.013 { 00:04:35.013 "method": "iscsi_set_options", 00:04:35.013 "params": { 00:04:35.013 "node_base": "iqn.2016-06.io.spdk", 00:04:35.013 "max_sessions": 128, 00:04:35.013 "max_connections_per_session": 2, 00:04:35.013 "max_queue_depth": 64, 00:04:35.013 "default_time2wait": 2, 00:04:35.013 "default_time2retain": 20, 00:04:35.013 "first_burst_length": 8192, 00:04:35.013 "immediate_data": true, 00:04:35.013 "allow_duplicated_isid": false, 00:04:35.013 "error_recovery_level": 0, 00:04:35.013 "nop_timeout": 60, 00:04:35.013 "nop_in_interval": 30, 00:04:35.013 "disable_chap": false, 00:04:35.013 "require_chap": false, 00:04:35.013 "mutual_chap": false, 00:04:35.013 "chap_group": 0, 00:04:35.013 "max_large_datain_per_connection": 64, 00:04:35.013 "max_r2t_per_connection": 4, 00:04:35.013 "pdu_pool_size": 36864, 00:04:35.013 "immediate_data_pool_size": 16384, 00:04:35.013 "data_out_pool_size": 2048 00:04:35.013 } 00:04:35.013 } 00:04:35.013 ] 00:04:35.013 } 00:04:35.013 ] 00:04:35.013 } 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 773661 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 773661 ']' 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 773661 00:04:35.013 12:44:42 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773661 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773661' 00:04:35.013 killing process with pid 773661 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 773661 00:04:35.013 12:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 773661 00:04:35.272 12:44:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=773764 00:04:35.273 12:44:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.273 12:44:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 773764 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 773764 ']' 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 773764 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773764 00:04:40.729 12:44:48 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773764' 00:04:40.729 killing process with pid 773764 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 773764 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 773764 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.729 00:04:40.729 real 0m6.247s 00:04:40.729 user 0m5.947s 00:04:40.729 sys 0m0.606s 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.729 ************************************ 00:04:40.729 END TEST skip_rpc_with_json 00:04:40.729 ************************************ 00:04:40.729 12:44:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.729 12:44:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.729 12:44:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.729 12:44:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.729 ************************************ 00:04:40.729 START TEST skip_rpc_with_delay 00:04:40.729 ************************************ 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- 
rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.729 [2024-12-15 12:44:48.498875] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.729 00:04:40.729 real 0m0.069s 00:04:40.729 user 0m0.044s 00:04:40.729 sys 0m0.024s 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.729 12:44:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.729 ************************************ 00:04:40.729 END TEST skip_rpc_with_delay 00:04:40.729 ************************************ 00:04:40.729 12:44:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.729 12:44:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.729 12:44:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.729 12:44:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.729 12:44:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.729 12:44:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.729 ************************************ 00:04:40.729 START TEST exit_on_failed_rpc_init 00:04:40.729 ************************************ 00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=774805 00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 774805 00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 774805 ']' 00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.730 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.730 [2024-12-15 12:44:48.630994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:40.730 [2024-12-15 12:44:48.631045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774805 ] 00:04:40.988 [2024-12-15 12:44:48.708392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.988 [2024-12-15 12:44:48.731116] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.248 
12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:41.248 12:44:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.248 [2024-12-15 12:44:48.983035] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:04:41.248 [2024-12-15 12:44:48.983089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774857 ] 00:04:41.248 [2024-12-15 12:44:49.057371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.248 [2024-12-15 12:44:49.079353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.248 [2024-12-15 12:44:49.079406] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:41.248 [2024-12-15 12:44:49.079415] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:41.248 [2024-12-15 12:44:49.079420] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 774805 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 774805 ']' 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 774805 00:04:41.248 12:44:49 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.248 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774805 00:04:41.507 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.507 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.507 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774805' 00:04:41.507 killing process with pid 774805 00:04:41.507 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 774805 00:04:41.507 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 774805 00:04:41.765 00:04:41.765 real 0m0.884s 00:04:41.765 user 0m0.916s 00:04:41.765 sys 0m0.384s 00:04:41.765 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.765 12:44:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.765 ************************************ 00:04:41.765 END TEST exit_on_failed_rpc_init 00:04:41.765 ************************************ 00:04:41.765 12:44:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:41.765 00:04:41.765 real 0m13.024s 00:04:41.765 user 0m12.248s 00:04:41.765 sys 0m1.563s 00:04:41.765 12:44:49 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.765 12:44:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.765 ************************************ 00:04:41.765 END TEST skip_rpc 00:04:41.765 ************************************ 00:04:41.765 12:44:49 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:41.765 12:44:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.765 12:44:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.765 12:44:49 -- common/autotest_common.sh@10 -- # set +x 00:04:41.765 ************************************ 00:04:41.765 START TEST rpc_client 00:04:41.765 ************************************ 00:04:41.765 12:44:49 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:41.765 * Looking for test storage... 00:04:41.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:41.765 12:44:49 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.765 12:44:49 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.765 12:44:49 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.025 12:44:49 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.025 12:44:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:42.025 12:44:49 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.025 12:44:49 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.025 --rc genhtml_branch_coverage=1 00:04:42.025 --rc genhtml_function_coverage=1 00:04:42.025 --rc genhtml_legend=1 00:04:42.025 --rc geninfo_all_blocks=1 00:04:42.025 --rc geninfo_unexecuted_blocks=1 00:04:42.025 00:04:42.025 ' 00:04:42.025 12:44:49 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.025 --rc genhtml_branch_coverage=1 
00:04:42.025 --rc genhtml_function_coverage=1 00:04:42.025 --rc genhtml_legend=1 00:04:42.025 --rc geninfo_all_blocks=1 00:04:42.025 --rc geninfo_unexecuted_blocks=1 00:04:42.025 00:04:42.025 ' 00:04:42.025 12:44:49 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.025 --rc genhtml_branch_coverage=1 00:04:42.025 --rc genhtml_function_coverage=1 00:04:42.025 --rc genhtml_legend=1 00:04:42.025 --rc geninfo_all_blocks=1 00:04:42.025 --rc geninfo_unexecuted_blocks=1 00:04:42.025 00:04:42.025 ' 00:04:42.025 12:44:49 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.025 --rc genhtml_branch_coverage=1 00:04:42.025 --rc genhtml_function_coverage=1 00:04:42.025 --rc genhtml_legend=1 00:04:42.025 --rc geninfo_all_blocks=1 00:04:42.025 --rc geninfo_unexecuted_blocks=1 00:04:42.025 00:04:42.025 ' 00:04:42.025 12:44:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:42.025 OK 00:04:42.025 12:44:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.025 00:04:42.025 real 0m0.196s 00:04:42.025 user 0m0.135s 00:04:42.025 sys 0m0.076s 00:04:42.025 12:44:49 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.025 12:44:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:42.025 ************************************ 00:04:42.025 END TEST rpc_client 00:04:42.025 ************************************ 00:04:42.025 12:44:49 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.025 12:44:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.025 12:44:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.025 12:44:49 -- common/autotest_common.sh@10 
-- # set +x 00:04:42.025 ************************************ 00:04:42.025 START TEST json_config 00:04:42.025 ************************************ 00:04:42.025 12:44:49 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.025 12:44:49 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.025 12:44:49 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.025 12:44:49 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.285 12:44:49 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.285 12:44:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.285 12:44:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.285 12:44:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.285 12:44:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.285 12:44:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.285 12:44:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.285 12:44:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.285 12:44:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.285 12:44:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.285 12:44:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.285 12:44:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.285 12:44:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:42.285 12:44:49 json_config -- scripts/common.sh@345 -- # : 1 00:04:42.285 12:44:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.285 12:44:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.285 12:44:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:42.285 12:44:49 json_config -- scripts/common.sh@353 -- # local d=1 00:04:42.285 12:44:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.285 12:44:49 json_config -- scripts/common.sh@355 -- # echo 1 00:04:42.285 12:44:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.285 12:44:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:42.285 12:44:49 json_config -- scripts/common.sh@353 -- # local d=2 00:04:42.285 12:44:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.285 12:44:49 json_config -- scripts/common.sh@355 -- # echo 2 00:04:42.285 12:44:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.285 12:44:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.285 12:44:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.285 12:44:49 json_config -- scripts/common.sh@368 -- # return 0 00:04:42.285 12:44:49 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.285 12:44:49 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.285 --rc genhtml_branch_coverage=1 00:04:42.285 --rc genhtml_function_coverage=1 00:04:42.285 --rc genhtml_legend=1 00:04:42.285 --rc geninfo_all_blocks=1 00:04:42.285 --rc geninfo_unexecuted_blocks=1 00:04:42.285 00:04:42.285 ' 00:04:42.285 12:44:49 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.286 --rc genhtml_branch_coverage=1 00:04:42.286 --rc genhtml_function_coverage=1 00:04:42.286 --rc genhtml_legend=1 00:04:42.286 --rc geninfo_all_blocks=1 00:04:42.286 --rc geninfo_unexecuted_blocks=1 00:04:42.286 00:04:42.286 ' 00:04:42.286 12:44:49 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.286 --rc genhtml_branch_coverage=1 00:04:42.286 --rc genhtml_function_coverage=1 00:04:42.286 --rc genhtml_legend=1 00:04:42.286 --rc geninfo_all_blocks=1 00:04:42.286 --rc geninfo_unexecuted_blocks=1 00:04:42.286 00:04:42.286 ' 00:04:42.286 12:44:49 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.286 --rc genhtml_branch_coverage=1 00:04:42.286 --rc genhtml_function_coverage=1 00:04:42.286 --rc genhtml_legend=1 00:04:42.286 --rc geninfo_all_blocks=1 00:04:42.286 --rc geninfo_unexecuted_blocks=1 00:04:42.286 00:04:42.286 ' 00:04:42.286 12:44:49 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.286 12:44:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.286 12:44:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.286 12:44:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.286 12:44:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.286 12:44:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.286 12:44:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.286 12:44:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.286 12:44:50 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.286 12:44:50 json_config -- paths/export.sh@5 -- # export PATH 00:04:42.286 12:44:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@51 -- # : 0 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.286 12:44:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:42.286 INFO: JSON configuration test init 00:04:42.286 12:44:50 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.286 12:44:50 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:42.286 12:44:50 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.286 12:44:50 json_config -- json_config/common.sh@10 -- # shift 00:04:42.286 12:44:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.286 12:44:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.286 12:44:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.286 12:44:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.286 12:44:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.286 12:44:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=775205 00:04:42.286 12:44:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.286 Waiting for target to run... 
00:04:42.286 12:44:50 json_config -- json_config/common.sh@25 -- # waitforlisten 775205 /var/tmp/spdk_tgt.sock 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 775205 ']' 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.286 12:44:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.286 12:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.286 [2024-12-15 12:44:50.096352] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:04:42.286 [2024-12-15 12:44:50.096426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775205 ] 00:04:42.855 [2024-12-15 12:44:50.554623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.855 [2024-12-15 12:44:50.576192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.113 12:44:50 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.113 12:44:50 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:43.113 12:44:50 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.113 00:04:43.113 12:44:50 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:43.113 12:44:50 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:43.113 12:44:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.113 12:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.113 12:44:50 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:43.113 12:44:50 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:43.113 12:44:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.113 12:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.113 12:44:50 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:43.113 12:44:50 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:43.113 12:44:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:46.396 12:44:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.396 12:44:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:46.396 12:44:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@54 -- # sort 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:46.396 12:44:54 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:46.396 12:44:54 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:46.396 12:44:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.397 12:44:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:46.397 12:44:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.397 12:44:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:46.397 12:44:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.397 12:44:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.655 MallocForNvmf0 00:04:46.655 12:44:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:46.655 12:44:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.913 MallocForNvmf1 00:04:46.913 12:44:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:46.913 12:44:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.172 [2024-12-15 12:44:54.846729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.172 12:44:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.172 12:44:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.172 12:44:55 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.172 12:44:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.438 12:44:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.438 12:44:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.699 12:44:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.699 12:44:55 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.957 [2024-12-15 12:44:55.645150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.957 12:44:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:47.957 12:44:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.957 12:44:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.957 12:44:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:47.957 12:44:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.957 12:44:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.957 12:44:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:47.957 12:44:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.957 12:44:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.215 MallocBdevForConfigChangeCheck 00:04:48.215 12:44:55 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:48.215 12:44:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.215 12:44:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.215 12:44:55 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:48.215 12:44:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.473 12:44:56 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:48.473 INFO: shutting down applications... 00:04:48.473 12:44:56 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:48.473 12:44:56 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:48.473 12:44:56 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:48.473 12:44:56 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:50.384 Calling clear_iscsi_subsystem 00:04:50.384 Calling clear_nvmf_subsystem 00:04:50.384 Calling clear_nbd_subsystem 00:04:50.384 Calling clear_ublk_subsystem 00:04:50.384 Calling clear_vhost_blk_subsystem 00:04:50.384 Calling clear_vhost_scsi_subsystem 00:04:50.384 Calling clear_bdev_subsystem 00:04:50.384 12:44:57 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:50.384 12:44:57 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:50.384 12:44:57 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:50.384 12:44:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.384 12:44:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:50.384 12:44:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:50.384 12:44:58 json_config -- json_config/json_config.sh@352 -- # break 00:04:50.384 12:44:58 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:50.384 12:44:58 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:50.384 12:44:58 json_config -- json_config/common.sh@31 -- # local app=target 00:04:50.384 12:44:58 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.384 12:44:58 json_config -- json_config/common.sh@35 -- # [[ -n 775205 ]] 00:04:50.384 12:44:58 json_config -- json_config/common.sh@38 -- # kill -SIGINT 775205 00:04:50.384 12:44:58 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.384 12:44:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.384 12:44:58 json_config -- json_config/common.sh@41 -- # kill -0 775205 00:04:50.384 12:44:58 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.951 12:44:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.951 12:44:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.951 12:44:58 json_config -- json_config/common.sh@41 -- # kill -0 775205 00:04:50.951 12:44:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.951 12:44:58 json_config -- json_config/common.sh@43 -- # break 00:04:50.951 12:44:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.951 12:44:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.951 SPDK target shutdown done 00:04:50.951 12:44:58 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:50.951 INFO: relaunching applications... 
00:04:50.951 12:44:58 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.951 12:44:58 json_config -- json_config/common.sh@9 -- # local app=target 00:04:50.951 12:44:58 json_config -- json_config/common.sh@10 -- # shift 00:04:50.951 12:44:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.951 12:44:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.951 12:44:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.951 12:44:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.951 12:44:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.951 12:44:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=776692 00:04:50.951 12:44:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.951 Waiting for target to run... 00:04:50.951 12:44:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.952 12:44:58 json_config -- json_config/common.sh@25 -- # waitforlisten 776692 /var/tmp/spdk_tgt.sock 00:04:50.952 12:44:58 json_config -- common/autotest_common.sh@835 -- # '[' -z 776692 ']' 00:04:50.952 12:44:58 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.952 12:44:58 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.952 12:44:58 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:50.952 12:44:58 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.952 12:44:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.952 [2024-12-15 12:44:58.838559] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:50.952 [2024-12-15 12:44:58.838621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776692 ] 00:04:51.518 [2024-12-15 12:44:59.300107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.518 [2024-12-15 12:44:59.319672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.801 [2024-12-15 12:45:02.324750] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.801 [2024-12-15 12:45:02.357033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:55.369 12:45:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.369 12:45:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:55.369 12:45:03 json_config -- json_config/common.sh@26 -- # echo '' 00:04:55.369 00:04:55.369 12:45:03 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:55.369 12:45:03 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:55.369 INFO: Checking if target configuration is the same... 
00:04:55.369 12:45:03 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:55.369 12:45:03 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.369 12:45:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.369 + '[' 2 -ne 2 ']' 00:04:55.369 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.369 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:55.369 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:55.369 +++ basename /dev/fd/62 00:04:55.369 ++ mktemp /tmp/62.XXX 00:04:55.369 + tmp_file_1=/tmp/62.LF0 00:04:55.369 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.369 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.369 + tmp_file_2=/tmp/spdk_tgt_config.json.rwE 00:04:55.369 + ret=0 00:04:55.369 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.628 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.628 + diff -u /tmp/62.LF0 /tmp/spdk_tgt_config.json.rwE 00:04:55.628 + echo 'INFO: JSON config files are the same' 00:04:55.628 INFO: JSON config files are the same 00:04:55.628 + rm /tmp/62.LF0 /tmp/spdk_tgt_config.json.rwE 00:04:55.628 + exit 0 00:04:55.628 12:45:03 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:55.628 12:45:03 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:55.628 INFO: changing configuration and checking if this can be detected... 
00:04:55.628 12:45:03 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:55.628 12:45:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:55.886 12:45:03 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:04:55.886 12:45:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:55.886 12:45:03 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:55.886 + '[' 2 -ne 2 ']'
00:04:55.886 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:55.886 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:55.886 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:55.886 +++ basename /dev/fd/62
00:04:55.886 ++ mktemp /tmp/62.XXX
00:04:55.886 + tmp_file_1=/tmp/62.6ty
00:04:55.886 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:55.886 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:55.886 + tmp_file_2=/tmp/spdk_tgt_config.json.BUY
00:04:55.886 + ret=0
00:04:55.886 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:56.145 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:56.403 + diff -u /tmp/62.6ty /tmp/spdk_tgt_config.json.BUY
00:04:56.403 + ret=1
00:04:56.403 + echo '=== Start of file: /tmp/62.6ty ==='
00:04:56.403 + cat /tmp/62.6ty
00:04:56.403 + echo '=== End of file: /tmp/62.6ty ==='
00:04:56.403 + echo ''
00:04:56.403 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BUY ==='
00:04:56.403 + cat /tmp/spdk_tgt_config.json.BUY
00:04:56.403 + echo '=== End of file: /tmp/spdk_tgt_config.json.BUY ==='
00:04:56.403 + echo ''
00:04:56.403 + rm /tmp/62.6ty /tmp/spdk_tgt_config.json.BUY
00:04:56.403 + exit 1
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:04:56.403 INFO: configuration change detected.
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@324 -- # [[ -n 776692 ]]
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@200 -- # uname -s
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:56.403 12:45:04 json_config -- json_config/json_config.sh@330 -- # killprocess 776692
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@954 -- # '[' -z 776692 ']'
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@958 -- # kill -0 776692
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@959 -- # uname
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776692
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776692'
00:04:56.403 killing process with pid 776692
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@973 -- # kill 776692
00:04:56.403 12:45:04 json_config -- common/autotest_common.sh@978 -- # wait 776692
00:04:57.778 12:45:05 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:57.778 12:45:05 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:04:57.778 12:45:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:57.778 12:45:05 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:57.778 12:45:05 json_config -- json_config/json_config.sh@335 -- # return 0
00:04:57.778 12:45:05 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:04:57.778 INFO: Success
00:04:57.778
00:04:57.778 real 0m15.837s
00:04:57.778 user 0m16.968s
00:04:57.778 sys 0m2.115s
00:04:57.778 12:45:05 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.778 12:45:05 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:57.778 ************************************
00:04:57.778 END TEST json_config
00:04:57.778 ************************************
00:04:58.038 12:45:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:58.038 12:45:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:58.038 12:45:05 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:58.038 12:45:05 -- common/autotest_common.sh@10 -- # set +x
00:04:58.038 ************************************
00:04:58.038 START TEST json_config_extra_key
00:04:58.038 ************************************
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:58.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.038 --rc genhtml_branch_coverage=1
00:04:58.038 --rc genhtml_function_coverage=1
00:04:58.038 --rc genhtml_legend=1
00:04:58.038 --rc geninfo_all_blocks=1
00:04:58.038 --rc geninfo_unexecuted_blocks=1
00:04:58.038
00:04:58.038 '
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:58.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.038 --rc genhtml_branch_coverage=1
00:04:58.038 --rc genhtml_function_coverage=1
00:04:58.038 --rc genhtml_legend=1
00:04:58.038 --rc geninfo_all_blocks=1
00:04:58.038 --rc geninfo_unexecuted_blocks=1
00:04:58.038
00:04:58.038 '
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:58.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.038 --rc genhtml_branch_coverage=1
00:04:58.038 --rc genhtml_function_coverage=1
00:04:58.038 --rc genhtml_legend=1
00:04:58.038 --rc geninfo_all_blocks=1
00:04:58.038 --rc geninfo_unexecuted_blocks=1
00:04:58.038
00:04:58.038 '
00:04:58.038 12:45:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:58.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:58.038 --rc genhtml_branch_coverage=1
00:04:58.038 --rc genhtml_function_coverage=1
00:04:58.038 --rc genhtml_legend=1
00:04:58.038 --rc geninfo_all_blocks=1
00:04:58.038 --rc geninfo_unexecuted_blocks=1
00:04:58.038
00:04:58.038 '
00:04:58.038 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:58.038 12:45:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:58.038 12:45:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:58.038 12:45:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.039 12:45:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.039 12:45:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.039 12:45:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:04:58.039 12:45:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:58.039 12:45:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:04:58.039 INFO: launching applications...
00:04:58.039 12:45:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=778072
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:58.039 Waiting for target to run...
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 778072 /var/tmp/spdk_tgt.sock
00:04:58.039 12:45:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 778072 ']'
00:04:58.039 12:45:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:04:58.039 12:45:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:58.039 12:45:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:58.039 12:45:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:58.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:58.039 12:45:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:58.039 12:45:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:58.298 [2024-12-15 12:45:05.982506] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:04:58.298 [2024-12-15 12:45:05.982555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778072 ]
00:04:58.556 [2024-12-15 12:45:06.274562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.556 [2024-12-15 12:45:06.287639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:59.125 12:45:06 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:59.125 12:45:06 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:04:59.125 12:45:06 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:04:59.125
00:04:59.125 12:45:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:04:59.125 INFO: shutting down applications...
00:04:59.125 12:45:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:04:59.125 12:45:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:04:59.125 12:45:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:59.125 12:45:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 778072 ]]
00:04:59.125 12:45:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 778072
00:04:59.125 12:45:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:59.125 12:45:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:59.125 12:45:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 778072
00:04:59.125 12:45:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:59.691 12:45:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:59.691 12:45:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:59.691 12:45:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 778072
00:04:59.691 12:45:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:59.691 12:45:07 json_config_extra_key -- json_config/common.sh@43 -- # break
00:04:59.691 12:45:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:59.691 12:45:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:59.691 SPDK target shutdown done
00:04:59.691 12:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:04:59.691 Success
00:04:59.691
00:04:59.691 real 0m1.576s
00:04:59.691 user 0m1.343s
00:04:59.691 sys 0m0.396s
00:04:59.691 12:45:07 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:59.691 12:45:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:59.691 ************************************
00:04:59.691 END TEST json_config_extra_key
00:04:59.691 ************************************
00:04:59.691 12:45:07 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:59.691 12:45:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:59.691 12:45:07 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:59.691 12:45:07 -- common/autotest_common.sh@10 -- # set +x
00:04:59.691 ************************************
00:04:59.691 START TEST alias_rpc
00:04:59.691 ************************************
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
* Looking for test storage...
00:04:59.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@345 -- # : 1
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:59.691 12:45:07 alias_rpc -- scripts/common.sh@368 -- # return 0
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:59.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.691 --rc genhtml_branch_coverage=1
00:04:59.691 --rc genhtml_function_coverage=1
00:04:59.691 --rc genhtml_legend=1
00:04:59.691 --rc geninfo_all_blocks=1
00:04:59.691 --rc geninfo_unexecuted_blocks=1
00:04:59.691
00:04:59.691 '
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:59.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.691 --rc genhtml_branch_coverage=1
00:04:59.691 --rc genhtml_function_coverage=1
00:04:59.691 --rc genhtml_legend=1
00:04:59.691 --rc geninfo_all_blocks=1
00:04:59.691 --rc geninfo_unexecuted_blocks=1
00:04:59.691
00:04:59.691 '
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:59.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.691 --rc genhtml_branch_coverage=1
00:04:59.691 --rc genhtml_function_coverage=1
00:04:59.691 --rc genhtml_legend=1
00:04:59.691 --rc geninfo_all_blocks=1
00:04:59.691 --rc geninfo_unexecuted_blocks=1
00:04:59.691
00:04:59.691 '
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:59.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.691 --rc genhtml_branch_coverage=1
00:04:59.691 --rc genhtml_function_coverage=1
00:04:59.691 --rc genhtml_legend=1
00:04:59.691 --rc geninfo_all_blocks=1
00:04:59.691 --rc geninfo_unexecuted_blocks=1
00:04:59.691
00:04:59.691 '
00:04:59.691 12:45:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:59.691 12:45:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=778816
00:04:59.691 12:45:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 778816
00:04:59.691 12:45:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 778816 ']'
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:59.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:59.691 12:45:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:59.950 [2024-12-15 12:45:07.623304] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:04:59.950 [2024-12-15 12:45:07.623356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778816 ]
00:04:59.950 [2024-12-15 12:45:07.698749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:59.950 [2024-12-15 12:45:07.721979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.208 12:45:07 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:00.208 12:45:07 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:00.208 12:45:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:05:00.467 12:45:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 778816
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 778816 ']'
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 778816
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 778816
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 778816'
00:05:00.467 killing process with pid 778816
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@973 -- # kill 778816
00:05:00.467 12:45:08 alias_rpc -- common/autotest_common.sh@978 -- # wait 778816
00:05:00.726
00:05:00.726 real 0m1.093s
00:05:00.726 user 0m1.104s
00:05:00.726 sys 0m0.421s
00:05:00.726 12:45:08 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:00.726 12:45:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:00.726 ************************************
00:05:00.726 END TEST alias_rpc
00:05:00.726 ************************************
00:05:00.726 12:45:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:05:00.726 12:45:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:00.726 12:45:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:00.726 12:45:08 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:00.726 12:45:08 -- common/autotest_common.sh@10 -- # set +x
00:05:00.726 ************************************
00:05:00.726 START TEST spdkcli_tcp
00:05:00.726 ************************************
00:05:00.726 12:45:08 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
* Looking for test storage...
00:05:00.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:00.985 12:45:08 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.985 12:45:08 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.985 12:45:08 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.985 12:45:08 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.985 12:45:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:00.985 12:45:08 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.985 12:45:08 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.985 --rc genhtml_branch_coverage=1 00:05:00.985 --rc genhtml_function_coverage=1 00:05:00.985 --rc genhtml_legend=1 00:05:00.985 --rc geninfo_all_blocks=1 00:05:00.985 --rc geninfo_unexecuted_blocks=1 00:05:00.985 00:05:00.985 ' 00:05:00.985 12:45:08 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.985 --rc genhtml_branch_coverage=1 00:05:00.985 --rc genhtml_function_coverage=1 00:05:00.985 --rc genhtml_legend=1 00:05:00.985 --rc geninfo_all_blocks=1 00:05:00.985 --rc geninfo_unexecuted_blocks=1 00:05:00.985 00:05:00.985 ' 00:05:00.985 12:45:08 spdkcli_tcp -- 
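The xtrace above steps through scripts/common.sh's field-wise version compare (`lt 1.15 2`: split each version on `.` and `-`, then compare numerically field by field). A minimal standalone sketch of that compare, under the assumption it generalizes as traced; `ver_lt` is a hypothetical name, not the helper's real identifier:

```shell
# Sketch of the field-wise version compare traced above ("lt 1.15 2"):
# split both versions on "." and "-", then compare field by field.
# ver_lt is a hypothetical name, not part of scripts/common.sh.
ver_lt() {
    local IFS=.-
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=${#v1[@]} i
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not "less than"
}

ver_lt 1.15 2 && echo "older"   # 1 < 2 on the first field, prints "older"
```

This is why the trace above resolves `lt 1.15 2` as true and selects the lcov 1.x option set.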
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.985 --rc genhtml_branch_coverage=1 00:05:00.985 --rc genhtml_function_coverage=1 00:05:00.985 --rc genhtml_legend=1 00:05:00.985 --rc geninfo_all_blocks=1 00:05:00.985 --rc geninfo_unexecuted_blocks=1 00:05:00.985 00:05:00.985 ' 00:05:00.985 12:45:08 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.985 --rc genhtml_branch_coverage=1 00:05:00.985 --rc genhtml_function_coverage=1 00:05:00.985 --rc genhtml_legend=1 00:05:00.985 --rc geninfo_all_blocks=1 00:05:00.985 --rc geninfo_unexecuted_blocks=1 00:05:00.985 00:05:00.985 ' 00:05:00.985 12:45:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:00.985 12:45:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:00.985 12:45:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:00.985 12:45:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:00.985 12:45:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:00.985 12:45:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:00.985 12:45:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:00.985 12:45:08 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.986 12:45:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.986 12:45:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=779069 00:05:00.986 12:45:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 779069 00:05:00.986 12:45:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:00.986 12:45:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 779069 ']' 00:05:00.986 12:45:08 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.986 12:45:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.986 12:45:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.986 12:45:08 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.986 12:45:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.986 [2024-12-15 12:45:08.786163] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:00.986 [2024-12-15 12:45:08.786215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779069 ] 00:05:00.986 [2024-12-15 12:45:08.863300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.986 [2024-12-15 12:45:08.887054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.986 [2024-12-15 12:45:08.887054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.245 12:45:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.245 12:45:09 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:01.245 12:45:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=779237 00:05:01.245 12:45:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.245 12:45:09 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:01.504 [ 00:05:01.504 "bdev_malloc_delete", 00:05:01.504 "bdev_malloc_create", 00:05:01.504 "bdev_null_resize", 00:05:01.504 "bdev_null_delete", 00:05:01.504 "bdev_null_create", 00:05:01.504 "bdev_nvme_cuse_unregister", 00:05:01.504 "bdev_nvme_cuse_register", 00:05:01.504 "bdev_opal_new_user", 00:05:01.504 "bdev_opal_set_lock_state", 00:05:01.504 "bdev_opal_delete", 00:05:01.504 "bdev_opal_get_info", 00:05:01.504 "bdev_opal_create", 00:05:01.504 "bdev_nvme_opal_revert", 00:05:01.504 "bdev_nvme_opal_init", 00:05:01.504 "bdev_nvme_send_cmd", 00:05:01.504 "bdev_nvme_set_keys", 00:05:01.504 "bdev_nvme_get_path_iostat", 00:05:01.504 "bdev_nvme_get_mdns_discovery_info", 00:05:01.504 "bdev_nvme_stop_mdns_discovery", 00:05:01.504 "bdev_nvme_start_mdns_discovery", 00:05:01.504 "bdev_nvme_set_multipath_policy", 00:05:01.504 "bdev_nvme_set_preferred_path", 00:05:01.504 "bdev_nvme_get_io_paths", 00:05:01.504 "bdev_nvme_remove_error_injection", 00:05:01.504 "bdev_nvme_add_error_injection", 00:05:01.504 "bdev_nvme_get_discovery_info", 00:05:01.504 "bdev_nvme_stop_discovery", 00:05:01.504 "bdev_nvme_start_discovery", 00:05:01.504 "bdev_nvme_get_controller_health_info", 00:05:01.504 "bdev_nvme_disable_controller", 00:05:01.504 "bdev_nvme_enable_controller", 00:05:01.504 "bdev_nvme_reset_controller", 00:05:01.504 "bdev_nvme_get_transport_statistics", 00:05:01.504 "bdev_nvme_apply_firmware", 00:05:01.504 "bdev_nvme_detach_controller", 00:05:01.504 "bdev_nvme_get_controllers", 00:05:01.504 "bdev_nvme_attach_controller", 00:05:01.504 "bdev_nvme_set_hotplug", 00:05:01.504 "bdev_nvme_set_options", 00:05:01.504 "bdev_passthru_delete", 00:05:01.504 "bdev_passthru_create", 00:05:01.504 "bdev_lvol_set_parent_bdev", 00:05:01.504 "bdev_lvol_set_parent", 00:05:01.504 "bdev_lvol_check_shallow_copy", 00:05:01.504 "bdev_lvol_start_shallow_copy", 00:05:01.504 "bdev_lvol_grow_lvstore", 00:05:01.504 
"bdev_lvol_get_lvols", 00:05:01.504 "bdev_lvol_get_lvstores", 00:05:01.504 "bdev_lvol_delete", 00:05:01.504 "bdev_lvol_set_read_only", 00:05:01.504 "bdev_lvol_resize", 00:05:01.504 "bdev_lvol_decouple_parent", 00:05:01.504 "bdev_lvol_inflate", 00:05:01.504 "bdev_lvol_rename", 00:05:01.504 "bdev_lvol_clone_bdev", 00:05:01.504 "bdev_lvol_clone", 00:05:01.504 "bdev_lvol_snapshot", 00:05:01.504 "bdev_lvol_create", 00:05:01.504 "bdev_lvol_delete_lvstore", 00:05:01.504 "bdev_lvol_rename_lvstore", 00:05:01.504 "bdev_lvol_create_lvstore", 00:05:01.504 "bdev_raid_set_options", 00:05:01.504 "bdev_raid_remove_base_bdev", 00:05:01.504 "bdev_raid_add_base_bdev", 00:05:01.504 "bdev_raid_delete", 00:05:01.504 "bdev_raid_create", 00:05:01.504 "bdev_raid_get_bdevs", 00:05:01.504 "bdev_error_inject_error", 00:05:01.504 "bdev_error_delete", 00:05:01.504 "bdev_error_create", 00:05:01.504 "bdev_split_delete", 00:05:01.504 "bdev_split_create", 00:05:01.504 "bdev_delay_delete", 00:05:01.504 "bdev_delay_create", 00:05:01.504 "bdev_delay_update_latency", 00:05:01.504 "bdev_zone_block_delete", 00:05:01.504 "bdev_zone_block_create", 00:05:01.504 "blobfs_create", 00:05:01.504 "blobfs_detect", 00:05:01.504 "blobfs_set_cache_size", 00:05:01.504 "bdev_aio_delete", 00:05:01.504 "bdev_aio_rescan", 00:05:01.504 "bdev_aio_create", 00:05:01.504 "bdev_ftl_set_property", 00:05:01.504 "bdev_ftl_get_properties", 00:05:01.504 "bdev_ftl_get_stats", 00:05:01.504 "bdev_ftl_unmap", 00:05:01.504 "bdev_ftl_unload", 00:05:01.504 "bdev_ftl_delete", 00:05:01.504 "bdev_ftl_load", 00:05:01.504 "bdev_ftl_create", 00:05:01.504 "bdev_virtio_attach_controller", 00:05:01.504 "bdev_virtio_scsi_get_devices", 00:05:01.504 "bdev_virtio_detach_controller", 00:05:01.504 "bdev_virtio_blk_set_hotplug", 00:05:01.504 "bdev_iscsi_delete", 00:05:01.504 "bdev_iscsi_create", 00:05:01.504 "bdev_iscsi_set_options", 00:05:01.504 "accel_error_inject_error", 00:05:01.504 "ioat_scan_accel_module", 00:05:01.504 "dsa_scan_accel_module", 
00:05:01.504 "iaa_scan_accel_module", 00:05:01.504 "vfu_virtio_create_fs_endpoint", 00:05:01.504 "vfu_virtio_create_scsi_endpoint", 00:05:01.504 "vfu_virtio_scsi_remove_target", 00:05:01.504 "vfu_virtio_scsi_add_target", 00:05:01.504 "vfu_virtio_create_blk_endpoint", 00:05:01.504 "vfu_virtio_delete_endpoint", 00:05:01.504 "keyring_file_remove_key", 00:05:01.504 "keyring_file_add_key", 00:05:01.504 "keyring_linux_set_options", 00:05:01.504 "fsdev_aio_delete", 00:05:01.504 "fsdev_aio_create", 00:05:01.504 "iscsi_get_histogram", 00:05:01.504 "iscsi_enable_histogram", 00:05:01.504 "iscsi_set_options", 00:05:01.504 "iscsi_get_auth_groups", 00:05:01.504 "iscsi_auth_group_remove_secret", 00:05:01.504 "iscsi_auth_group_add_secret", 00:05:01.504 "iscsi_delete_auth_group", 00:05:01.504 "iscsi_create_auth_group", 00:05:01.504 "iscsi_set_discovery_auth", 00:05:01.504 "iscsi_get_options", 00:05:01.504 "iscsi_target_node_request_logout", 00:05:01.504 "iscsi_target_node_set_redirect", 00:05:01.504 "iscsi_target_node_set_auth", 00:05:01.504 "iscsi_target_node_add_lun", 00:05:01.504 "iscsi_get_stats", 00:05:01.504 "iscsi_get_connections", 00:05:01.504 "iscsi_portal_group_set_auth", 00:05:01.504 "iscsi_start_portal_group", 00:05:01.504 "iscsi_delete_portal_group", 00:05:01.504 "iscsi_create_portal_group", 00:05:01.504 "iscsi_get_portal_groups", 00:05:01.504 "iscsi_delete_target_node", 00:05:01.504 "iscsi_target_node_remove_pg_ig_maps", 00:05:01.504 "iscsi_target_node_add_pg_ig_maps", 00:05:01.504 "iscsi_create_target_node", 00:05:01.504 "iscsi_get_target_nodes", 00:05:01.504 "iscsi_delete_initiator_group", 00:05:01.504 "iscsi_initiator_group_remove_initiators", 00:05:01.504 "iscsi_initiator_group_add_initiators", 00:05:01.504 "iscsi_create_initiator_group", 00:05:01.504 "iscsi_get_initiator_groups", 00:05:01.504 "nvmf_set_crdt", 00:05:01.504 "nvmf_set_config", 00:05:01.504 "nvmf_set_max_subsystems", 00:05:01.504 "nvmf_stop_mdns_prr", 00:05:01.505 "nvmf_publish_mdns_prr", 
00:05:01.505 "nvmf_subsystem_get_listeners", 00:05:01.505 "nvmf_subsystem_get_qpairs", 00:05:01.505 "nvmf_subsystem_get_controllers", 00:05:01.505 "nvmf_get_stats", 00:05:01.505 "nvmf_get_transports", 00:05:01.505 "nvmf_create_transport", 00:05:01.505 "nvmf_get_targets", 00:05:01.505 "nvmf_delete_target", 00:05:01.505 "nvmf_create_target", 00:05:01.505 "nvmf_subsystem_allow_any_host", 00:05:01.505 "nvmf_subsystem_set_keys", 00:05:01.505 "nvmf_subsystem_remove_host", 00:05:01.505 "nvmf_subsystem_add_host", 00:05:01.505 "nvmf_ns_remove_host", 00:05:01.505 "nvmf_ns_add_host", 00:05:01.505 "nvmf_subsystem_remove_ns", 00:05:01.505 "nvmf_subsystem_set_ns_ana_group", 00:05:01.505 "nvmf_subsystem_add_ns", 00:05:01.505 "nvmf_subsystem_listener_set_ana_state", 00:05:01.505 "nvmf_discovery_get_referrals", 00:05:01.505 "nvmf_discovery_remove_referral", 00:05:01.505 "nvmf_discovery_add_referral", 00:05:01.505 "nvmf_subsystem_remove_listener", 00:05:01.505 "nvmf_subsystem_add_listener", 00:05:01.505 "nvmf_delete_subsystem", 00:05:01.505 "nvmf_create_subsystem", 00:05:01.505 "nvmf_get_subsystems", 00:05:01.505 "env_dpdk_get_mem_stats", 00:05:01.505 "nbd_get_disks", 00:05:01.505 "nbd_stop_disk", 00:05:01.505 "nbd_start_disk", 00:05:01.505 "ublk_recover_disk", 00:05:01.505 "ublk_get_disks", 00:05:01.505 "ublk_stop_disk", 00:05:01.505 "ublk_start_disk", 00:05:01.505 "ublk_destroy_target", 00:05:01.505 "ublk_create_target", 00:05:01.505 "virtio_blk_create_transport", 00:05:01.505 "virtio_blk_get_transports", 00:05:01.505 "vhost_controller_set_coalescing", 00:05:01.505 "vhost_get_controllers", 00:05:01.505 "vhost_delete_controller", 00:05:01.505 "vhost_create_blk_controller", 00:05:01.505 "vhost_scsi_controller_remove_target", 00:05:01.505 "vhost_scsi_controller_add_target", 00:05:01.505 "vhost_start_scsi_controller", 00:05:01.505 "vhost_create_scsi_controller", 00:05:01.505 "thread_set_cpumask", 00:05:01.505 "scheduler_set_options", 00:05:01.505 "framework_get_governor", 00:05:01.505 
"framework_get_scheduler", 00:05:01.505 "framework_set_scheduler", 00:05:01.505 "framework_get_reactors", 00:05:01.505 "thread_get_io_channels", 00:05:01.505 "thread_get_pollers", 00:05:01.505 "thread_get_stats", 00:05:01.505 "framework_monitor_context_switch", 00:05:01.505 "spdk_kill_instance", 00:05:01.505 "log_enable_timestamps", 00:05:01.505 "log_get_flags", 00:05:01.505 "log_clear_flag", 00:05:01.505 "log_set_flag", 00:05:01.505 "log_get_level", 00:05:01.505 "log_set_level", 00:05:01.505 "log_get_print_level", 00:05:01.505 "log_set_print_level", 00:05:01.505 "framework_enable_cpumask_locks", 00:05:01.505 "framework_disable_cpumask_locks", 00:05:01.505 "framework_wait_init", 00:05:01.505 "framework_start_init", 00:05:01.505 "scsi_get_devices", 00:05:01.505 "bdev_get_histogram", 00:05:01.505 "bdev_enable_histogram", 00:05:01.505 "bdev_set_qos_limit", 00:05:01.505 "bdev_set_qd_sampling_period", 00:05:01.505 "bdev_get_bdevs", 00:05:01.505 "bdev_reset_iostat", 00:05:01.505 "bdev_get_iostat", 00:05:01.505 "bdev_examine", 00:05:01.505 "bdev_wait_for_examine", 00:05:01.505 "bdev_set_options", 00:05:01.505 "accel_get_stats", 00:05:01.505 "accel_set_options", 00:05:01.505 "accel_set_driver", 00:05:01.505 "accel_crypto_key_destroy", 00:05:01.505 "accel_crypto_keys_get", 00:05:01.505 "accel_crypto_key_create", 00:05:01.505 "accel_assign_opc", 00:05:01.505 "accel_get_module_info", 00:05:01.505 "accel_get_opc_assignments", 00:05:01.505 "vmd_rescan", 00:05:01.505 "vmd_remove_device", 00:05:01.505 "vmd_enable", 00:05:01.505 "sock_get_default_impl", 00:05:01.505 "sock_set_default_impl", 00:05:01.505 "sock_impl_set_options", 00:05:01.505 "sock_impl_get_options", 00:05:01.505 "iobuf_get_stats", 00:05:01.505 "iobuf_set_options", 00:05:01.505 "keyring_get_keys", 00:05:01.505 "vfu_tgt_set_base_path", 00:05:01.505 "framework_get_pci_devices", 00:05:01.505 "framework_get_config", 00:05:01.505 "framework_get_subsystems", 00:05:01.505 "fsdev_set_opts", 00:05:01.505 "fsdev_get_opts", 
00:05:01.505 "trace_get_info", 00:05:01.505 "trace_get_tpoint_group_mask", 00:05:01.505 "trace_disable_tpoint_group", 00:05:01.505 "trace_enable_tpoint_group", 00:05:01.505 "trace_clear_tpoint_mask", 00:05:01.505 "trace_set_tpoint_mask", 00:05:01.505 "notify_get_notifications", 00:05:01.505 "notify_get_types", 00:05:01.505 "spdk_get_version", 00:05:01.505 "rpc_get_methods" 00:05:01.505 ] 00:05:01.505 12:45:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.505 12:45:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:01.505 12:45:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 779069 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 779069 ']' 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 779069 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779069 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779069' 00:05:01.505 killing process with pid 779069 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 779069 00:05:01.505 12:45:09 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 779069 00:05:01.764 00:05:01.764 real 0m1.118s 00:05:01.764 user 0m1.909s 00:05:01.764 sys 0m0.427s 00:05:01.764 12:45:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.764 12:45:09 spdkcli_tcp -- 
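The killprocess sequence traced above probes the target with `kill -0`, sends the kill, then reaps it with `wait`. A standalone sketch of that pattern, with `sleep 60` standing in for the spdk_tgt reactor process (nothing SPDK-specific is assumed, and the real helper also checks the process name via `ps`):

```shell
# Sketch of the killprocess pattern traced above: probe the PID with
# kill -0, terminate it, then reap it with wait. "sleep 60" is a
# stand-in for the spdk_tgt process under test.
sleep 60 &
pid=$!
kill -0 "$pid" || exit 1           # kill -0 sends no signal, only checks existence
kill "$pid"                        # SIGTERM, as killprocess sends
wait "$pid" 2>/dev/null || true    # reap the child; status reflects the signal
kill -0 "$pid" 2>/dev/null || echo "reaped"
```

After `wait` reaps the child, the final `kill -0` fails, which is how the script confirms the target is gone before the test moves on.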
common/autotest_common.sh@10 -- # set +x 00:05:01.764 ************************************ 00:05:01.764 END TEST spdkcli_tcp 00:05:01.764 ************************************ 00:05:02.023 12:45:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.023 12:45:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.023 12:45:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.023 12:45:09 -- common/autotest_common.sh@10 -- # set +x 00:05:02.023 ************************************ 00:05:02.023 START TEST dpdk_mem_utility 00:05:02.023 ************************************ 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.023 * Looking for test storage... 00:05:02.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.023 12:45:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:05:02.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.023 --rc genhtml_branch_coverage=1 00:05:02.023 --rc genhtml_function_coverage=1 00:05:02.023 --rc genhtml_legend=1 00:05:02.023 --rc geninfo_all_blocks=1 00:05:02.023 --rc geninfo_unexecuted_blocks=1 00:05:02.023 00:05:02.023 ' 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.023 --rc genhtml_branch_coverage=1 00:05:02.023 --rc genhtml_function_coverage=1 00:05:02.023 --rc genhtml_legend=1 00:05:02.023 --rc geninfo_all_blocks=1 00:05:02.023 --rc geninfo_unexecuted_blocks=1 00:05:02.023 00:05:02.023 ' 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.023 --rc genhtml_branch_coverage=1 00:05:02.023 --rc genhtml_function_coverage=1 00:05:02.023 --rc genhtml_legend=1 00:05:02.023 --rc geninfo_all_blocks=1 00:05:02.023 --rc geninfo_unexecuted_blocks=1 00:05:02.023 00:05:02.023 ' 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.023 --rc genhtml_branch_coverage=1 00:05:02.023 --rc genhtml_function_coverage=1 00:05:02.023 --rc genhtml_legend=1 00:05:02.023 --rc geninfo_all_blocks=1 00:05:02.023 --rc geninfo_unexecuted_blocks=1 00:05:02.023 00:05:02.023 ' 00:05:02.023 12:45:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:02.023 12:45:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=779317 00:05:02.023 12:45:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 779317 00:05:02.023 12:45:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 779317 ']' 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.023 12:45:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.283 [2024-12-15 12:45:09.967726] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:02.283 [2024-12-15 12:45:09.967771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779317 ] 00:05:02.283 [2024-12-15 12:45:10.046203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.283 [2024-12-15 12:45:10.069270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.542 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.542 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:02.542 12:45:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:02.542 12:45:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:02.542 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.542 
12:45:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.542 { 00:05:02.542 "filename": "/tmp/spdk_mem_dump.txt" 00:05:02.542 } 00:05:02.542 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.542 12:45:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:02.542 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:02.542 1 heaps totaling size 818.000000 MiB 00:05:02.542 size: 818.000000 MiB heap id: 0 00:05:02.542 end heaps---------- 00:05:02.543 9 mempools totaling size 603.782043 MiB 00:05:02.543 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:02.543 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:02.543 size: 100.555481 MiB name: bdev_io_779317 00:05:02.543 size: 50.003479 MiB name: msgpool_779317 00:05:02.543 size: 36.509338 MiB name: fsdev_io_779317 00:05:02.543 size: 21.763794 MiB name: PDU_Pool 00:05:02.543 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:02.543 size: 4.133484 MiB name: evtpool_779317 00:05:02.543 size: 0.026123 MiB name: Session_Pool 00:05:02.543 end mempools------- 00:05:02.543 6 memzones totaling size 4.142822 MiB 00:05:02.543 size: 1.000366 MiB name: RG_ring_0_779317 00:05:02.543 size: 1.000366 MiB name: RG_ring_1_779317 00:05:02.543 size: 1.000366 MiB name: RG_ring_4_779317 00:05:02.543 size: 1.000366 MiB name: RG_ring_5_779317 00:05:02.543 size: 0.125366 MiB name: RG_ring_2_779317 00:05:02.543 size: 0.015991 MiB name: RG_ring_3_779317 00:05:02.543 end memzones------- 00:05:02.543 12:45:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:02.543 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:02.543 list of free elements. 
size: 10.852478 MiB
00:05:02.543 element at address: 0x200019200000 with size: 0.999878 MiB
00:05:02.543 element at address: 0x200019400000 with size: 0.999878 MiB
00:05:02.543 element at address: 0x200000400000 with size: 0.998535 MiB
00:05:02.543 element at address: 0x200032000000 with size: 0.994446 MiB
00:05:02.543 element at address: 0x200006400000 with size: 0.959839 MiB
00:05:02.543 element at address: 0x200012c00000 with size: 0.944275 MiB
00:05:02.543 element at address: 0x200019600000 with size: 0.936584 MiB
00:05:02.543 element at address: 0x200000200000 with size: 0.717346 MiB
00:05:02.543 element at address: 0x20001ae00000 with size: 0.582886 MiB
00:05:02.543 element at address: 0x200000c00000 with size: 0.495422 MiB
00:05:02.543 element at address: 0x20000a600000 with size: 0.490723 MiB
00:05:02.543 element at address: 0x200019800000 with size: 0.485657 MiB
00:05:02.543 element at address: 0x200003e00000 with size: 0.481934 MiB
00:05:02.543 element at address: 0x200028200000 with size: 0.410034 MiB
00:05:02.543 element at address: 0x200000800000 with size: 0.355042 MiB
00:05:02.543 list of standard malloc elements.
size: 199.218628 MiB
00:05:02.543 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:05:02.543 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:05:02.543 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:02.543 element at address: 0x2000194fff80 with size: 1.000122 MiB
00:05:02.543 element at address: 0x2000196fff80 with size: 1.000122 MiB
00:05:02.543 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:02.543 element at address: 0x2000196eff00 with size: 0.062622 MiB
00:05:02.543 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:02.543 element at address: 0x2000196efdc0 with size: 0.000305 MiB
00:05:02.543 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20000085b040 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20000085f300 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20000087f680 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:05:02.543 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:05:02.543 element at address: 0x200000cff000 with size: 0.000183 MiB
00:05:02.543 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:05:02.543 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:05:02.543 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:05:02.543 element at address: 0x200003efb980 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:05:02.543 element at address: 0x200012cf1bc0 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000196efc40 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000196efd00 with size: 0.000183 MiB
00:05:02.543 element at address: 0x2000198bc740 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20001ae95380 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20001ae95440 with size: 0.000183 MiB
00:05:02.543 element at address: 0x200028268f80 with size: 0.000183 MiB
00:05:02.543 element at address: 0x200028269040 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20002826fc40 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20002826fe40 with size: 0.000183 MiB
00:05:02.543 element at address: 0x20002826ff00 with size: 0.000183 MiB
00:05:02.543 list of memzone associated elements.
size: 607.928894 MiB
00:05:02.543 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:05:02.543 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:02.543 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:05:02.543 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:02.543 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:05:02.543 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_779317_0
00:05:02.543 element at address: 0x200000dff380 with size: 48.003052 MiB
00:05:02.543 associated memzone info: size: 48.002930 MiB name: MP_msgpool_779317_0
00:05:02.543 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:05:02.543 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_779317_0
00:05:02.543 element at address: 0x2000199be940 with size: 20.255554 MiB
00:05:02.543 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:02.543 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:05:02.543 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:02.543 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:05:02.543 associated memzone info: size: 3.000122 MiB name: MP_evtpool_779317_0
00:05:02.543 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:05:02.543 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_779317
00:05:02.543 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:02.543 associated memzone info: size: 1.007996 MiB name: MP_evtpool_779317
00:05:02.543 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:05:02.543 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:02.543 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:05:02.543 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:02.543 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:05:02.543 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:02.543 element at address: 0x200003efba40 with size: 1.008118 MiB
00:05:02.543 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:02.543 element at address: 0x200000cff180 with size: 1.000488 MiB
00:05:02.543 associated memzone info: size: 1.000366 MiB name: RG_ring_0_779317
00:05:02.543 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:05:02.543 associated memzone info: size: 1.000366 MiB name: RG_ring_1_779317
00:05:02.543 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:05:02.543 associated memzone info: size: 1.000366 MiB name: RG_ring_4_779317
00:05:02.543 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:05:02.543 associated memzone info: size: 1.000366 MiB name: RG_ring_5_779317
00:05:02.543 element at address: 0x20000087f740 with size: 0.500488 MiB
00:05:02.543 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_779317
00:05:02.543 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:05:02.543 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_779317
00:05:02.543 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:05:02.543 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:02.543 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:05:02.543 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:02.543 element at address: 0x20001987c540 with size: 0.250488 MiB
00:05:02.543 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:02.543 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:05:02.543 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_779317
00:05:02.543 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:05:02.543 associated memzone info: size: 0.125366 MiB name: RG_ring_2_779317
00:05:02.543 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:05:02.543 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:02.543 element at address: 0x200028269100 with size: 0.023743 MiB
00:05:02.543 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:02.543 element at address: 0x20000085b100 with size: 0.016113 MiB
00:05:02.543 associated memzone info: size: 0.015991 MiB name: RG_ring_3_779317
00:05:02.543 element at address: 0x20002826f240 with size: 0.002441 MiB
00:05:02.543 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:02.543 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:05:02.543 associated memzone info: size: 0.000183 MiB name: MP_msgpool_779317
00:05:02.543 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:05:02.543 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_779317
00:05:02.543 element at address: 0x20000085af00 with size: 0.000305 MiB
00:05:02.543 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_779317
00:05:02.543 element at address: 0x20002826fd00 with size: 0.000305 MiB
00:05:02.543 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:02.543 12:45:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:02.543 12:45:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 779317
00:05:02.543 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 779317 ']'
00:05:02.543 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 779317
00:05:02.544 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:02.544 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:02.544 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779317
00:05:02.544 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:02.544 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:02.544 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779317'
00:05:02.544 killing process with pid 779317
00:05:02.544 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 779317
00:05:02.544 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 779317
00:05:03.110
00:05:03.110 real	0m0.987s
00:05:03.110 user	0m0.921s
00:05:03.110 sys	0m0.408s
00:05:03.110 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:03.110 12:45:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:03.110 ************************************
00:05:03.110 END TEST dpdk_mem_utility
00:05:03.110 ************************************
00:05:03.110 12:45:10 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:03.110 12:45:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:03.110 12:45:10 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:03.110 12:45:10 -- common/autotest_common.sh@10 -- # set +x
00:05:03.110 ************************************
00:05:03.110 START TEST event
00:05:03.110 ************************************
00:05:03.110 12:45:10 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:03.110 * Looking for test storage...
00:05:03.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:03.110 12:45:10 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:03.110 12:45:10 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:03.110 12:45:10 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:03.110 12:45:10 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:03.110 12:45:10 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:03.110 12:45:10 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:03.110 12:45:10 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:03.110 12:45:10 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:03.110 12:45:10 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:03.110 12:45:10 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:03.110 12:45:10 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:03.110 12:45:10 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:03.110 12:45:10 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:03.110 12:45:10 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:03.110 12:45:10 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:03.110 12:45:10 event -- scripts/common.sh@344 -- # case "$op" in
00:05:03.110 12:45:10 event -- scripts/common.sh@345 -- # : 1
00:05:03.110 12:45:10 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:03.110 12:45:10 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:03.110 12:45:10 event -- scripts/common.sh@365 -- # decimal 1
00:05:03.110 12:45:10 event -- scripts/common.sh@353 -- # local d=1
00:05:03.110 12:45:10 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:03.110 12:45:10 event -- scripts/common.sh@355 -- # echo 1
00:05:03.110 12:45:10 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:03.110 12:45:10 event -- scripts/common.sh@366 -- # decimal 2
00:05:03.110 12:45:10 event -- scripts/common.sh@353 -- # local d=2
00:05:03.110 12:45:10 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:03.111 12:45:10 event -- scripts/common.sh@355 -- # echo 2
00:05:03.111 12:45:10 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:03.111 12:45:10 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:03.111 12:45:10 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:03.111 12:45:10 event -- scripts/common.sh@368 -- # return 0
00:05:03.111 12:45:10 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:03.111 12:45:10 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:03.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.111 --rc genhtml_branch_coverage=1
00:05:03.111 --rc genhtml_function_coverage=1
00:05:03.111 --rc genhtml_legend=1
00:05:03.111 --rc geninfo_all_blocks=1
00:05:03.111 --rc geninfo_unexecuted_blocks=1
00:05:03.111
00:05:03.111 '
00:05:03.111 12:45:10 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:03.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.111 --rc genhtml_branch_coverage=1
00:05:03.111 --rc genhtml_function_coverage=1
00:05:03.111 --rc genhtml_legend=1
00:05:03.111 --rc geninfo_all_blocks=1
00:05:03.111 --rc geninfo_unexecuted_blocks=1
00:05:03.111
00:05:03.111 '
00:05:03.111 12:45:10 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:03.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.111 --rc genhtml_branch_coverage=1
00:05:03.111 --rc genhtml_function_coverage=1
00:05:03.111 --rc genhtml_legend=1
00:05:03.111 --rc geninfo_all_blocks=1
00:05:03.111 --rc geninfo_unexecuted_blocks=1
00:05:03.111
00:05:03.111 '
00:05:03.111 12:45:10 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:03.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.111 --rc genhtml_branch_coverage=1
00:05:03.111 --rc genhtml_function_coverage=1
00:05:03.111 --rc genhtml_legend=1
00:05:03.111 --rc geninfo_all_blocks=1
00:05:03.111 --rc geninfo_unexecuted_blocks=1
00:05:03.111
00:05:03.111 '
00:05:03.111 12:45:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:03.111 12:45:10 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:03.111 12:45:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:03.111 12:45:10 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:03.111 12:45:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:03.111 12:45:10 event -- common/autotest_common.sh@10 -- # set +x
00:05:03.111 ************************************
00:05:03.111 START TEST event_perf
00:05:03.111 ************************************
00:05:03.111 12:45:11 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:03.369 Running I/O for 1 seconds...[2024-12-15 12:45:11.029352] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:03.369 [2024-12-15 12:45:11.029420] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779604 ]
00:05:03.369 [2024-12-15 12:45:11.109120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:03.369 [2024-12-15 12:45:11.135121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:03.369 [2024-12-15 12:45:11.135137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:03.369 [2024-12-15 12:45:11.135228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:03.369 [2024-12-15 12:45:11.135229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:04.303 Running I/O for 1 seconds...
00:05:04.303 lcore  0:  205899
00:05:04.303 lcore  1:  205897
00:05:04.303 lcore  2:  205897
00:05:04.303 lcore  3:  205899
00:05:04.303 done.
00:05:04.303
00:05:04.303 real	0m1.161s
00:05:04.303 user	0m4.072s
00:05:04.303 sys	0m0.081s
00:05:04.303 12:45:12 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:04.303 12:45:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:04.303 ************************************
00:05:04.303 END TEST event_perf
00:05:04.303 ************************************
00:05:04.303 12:45:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:04.303 12:45:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:04.303 12:45:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.303 12:45:12 event -- common/autotest_common.sh@10 -- # set +x
00:05:04.562 ************************************
00:05:04.562 START TEST event_reactor
00:05:04.562 ************************************
00:05:04.562 12:45:12 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:04.562 [2024-12-15 12:45:12.262178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:04.562 [2024-12-15 12:45:12.262243] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779853 ]
00:05:04.562 [2024-12-15 12:45:12.341749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.562 [2024-12-15 12:45:12.363192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.498 test_start
00:05:05.498 oneshot
00:05:05.498 tick 100
00:05:05.498 tick 100
00:05:05.498 tick 250
00:05:05.498 tick 100
00:05:05.498 tick 100
00:05:05.498 tick 250
00:05:05.498 tick 100
00:05:05.498 tick 500
00:05:05.498 tick 100
00:05:05.498 tick 100
00:05:05.498 tick 250
00:05:05.498 tick 100
00:05:05.498 tick 100
00:05:05.498 test_end
00:05:05.498
00:05:05.498 real	0m1.154s
00:05:05.498 user	0m1.071s
00:05:05.498 sys	0m0.079s
00:05:05.498 12:45:13 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.498 12:45:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:05.498 ************************************
00:05:05.498 END TEST event_reactor
00:05:05.498 ************************************
00:05:05.757 12:45:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:05.757 12:45:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:05.757 12:45:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.757 12:45:13 event -- common/autotest_common.sh@10 -- # set +x
00:05:05.757 ************************************
00:05:05.757 START TEST event_reactor_perf
00:05:05.757 ************************************
00:05:05.757 12:45:13 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:05.757 [2024-12-15 12:45:13.486687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:05.757 [2024-12-15 12:45:13.486755] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780095 ]
00:05:05.757 [2024-12-15 12:45:13.567055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.757 [2024-12-15 12:45:13.588238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.134 test_start
00:05:07.134 test_end
00:05:07.134 Performance: 515263 events per second
00:05:07.134
00:05:07.134 real	0m1.157s
00:05:07.134 user	0m1.075s
00:05:07.134 sys	0m0.077s
00:05:07.134 12:45:14 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:07.134 12:45:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:07.134 ************************************
00:05:07.134 END TEST event_reactor_perf
00:05:07.134 ************************************
00:05:07.134 12:45:14 event -- event/event.sh@49 -- # uname -s
00:05:07.134 12:45:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:07.134 12:45:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:07.134 12:45:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:07.134 12:45:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:07.134 12:45:14 event -- common/autotest_common.sh@10 -- # set +x
00:05:07.135 ************************************
00:05:07.135 START TEST event_scheduler
00:05:07.135 ************************************
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:07.135 * Looking for test storage...
00:05:07.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:07.135 12:45:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.135 --rc genhtml_branch_coverage=1
00:05:07.135 --rc genhtml_function_coverage=1
00:05:07.135 --rc genhtml_legend=1
00:05:07.135 --rc geninfo_all_blocks=1
00:05:07.135 --rc geninfo_unexecuted_blocks=1
00:05:07.135
00:05:07.135 '
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.135 --rc genhtml_branch_coverage=1
00:05:07.135 --rc genhtml_function_coverage=1
00:05:07.135 --rc genhtml_legend=1
00:05:07.135 --rc geninfo_all_blocks=1
00:05:07.135 --rc geninfo_unexecuted_blocks=1
00:05:07.135
00:05:07.135 '
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.135 --rc genhtml_branch_coverage=1
00:05:07.135 --rc genhtml_function_coverage=1
00:05:07.135 --rc genhtml_legend=1
00:05:07.135 --rc geninfo_all_blocks=1
00:05:07.135 --rc geninfo_unexecuted_blocks=1
00:05:07.135
00:05:07.135 '
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:07.135 --rc genhtml_branch_coverage=1
00:05:07.135 --rc genhtml_function_coverage=1
00:05:07.135 --rc genhtml_legend=1
00:05:07.135 --rc geninfo_all_blocks=1
00:05:07.135 --rc geninfo_unexecuted_blocks=1
00:05:07.135
00:05:07.135 '
00:05:07.135 12:45:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:07.135 12:45:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=780372
00:05:07.135 12:45:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:07.135 12:45:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:07.135 12:45:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 780372
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 780372 ']'
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:07.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:07.135 12:45:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:07.135 [2024-12-15 12:45:14.920349] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
[2024-12-15 12:45:14.920391] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780372 ]
00:05:07.135 [2024-12-15 12:45:14.994261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:07.135 [2024-12-15 12:45:15.020403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.135 [2024-12-15 12:45:15.020512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:07.135 [2024-12-15 12:45:15.020622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:07.135 [2024-12-15 12:45:15.020623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:07.397 12:45:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:07.397 [2024-12-15 12:45:15.077228] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:07.397 [2024-12-15 12:45:15.077244] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
[2024-12-15 12:45:15.077253] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
[2024-12-15 12:45:15.077259] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
[2024-12-15 12:45:15.077264] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.397 12:45:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:07.397 [2024-12-15 12:45:15.147561] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.397 12:45:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:07.397 12:45:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:07.397 ************************************
00:05:07.397 START TEST scheduler_create_thread
00:05:07.397 ************************************
00:05:07.397 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:07.397 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 2
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 3
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 4
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 5
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 6
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 7
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 8
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 9
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 10
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:07.398 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:07.966 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:07.966 12:45:15
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.966 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.966 12:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.344 12:45:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.344 12:45:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:09.344 12:45:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:09.344 12:45:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.344 12:45:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.721 12:45:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.721 00:05:10.721 real 0m3.103s 00:05:10.721 user 0m0.024s 00:05:10.721 sys 0m0.006s 00:05:10.721 12:45:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.721 12:45:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.721 ************************************ 00:05:10.721 END TEST scheduler_create_thread 00:05:10.721 ************************************ 00:05:10.721 12:45:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:10.721 12:45:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 780372 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 780372 ']' 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 780372 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780372 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780372' 00:05:10.721 killing process with pid 780372 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 780372 00:05:10.721 12:45:18 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 780372 00:05:10.980 [2024-12-15 12:45:18.666721] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:10.980 00:05:10.980 real 0m4.148s 00:05:10.980 user 0m6.680s 00:05:10.980 sys 0m0.367s 00:05:10.980 12:45:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.980 12:45:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.980 ************************************ 00:05:10.980 END TEST event_scheduler 00:05:10.980 ************************************ 00:05:10.980 12:45:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:11.238 12:45:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:11.238 12:45:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.238 12:45:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.238 12:45:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.238 ************************************ 00:05:11.238 START TEST app_repeat 00:05:11.238 ************************************ 00:05:11.238 12:45:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=781092 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 781092' 00:05:11.238 Process app_repeat pid: 781092 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:11.238 spdk_app_start Round 0 00:05:11.238 12:45:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781092 /var/tmp/spdk-nbd.sock 00:05:11.238 12:45:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781092 ']' 00:05:11.238 12:45:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.238 12:45:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.238 12:45:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.238 12:45:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.238 12:45:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.238 [2024-12-15 12:45:18.961667] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:11.238 [2024-12-15 12:45:18.961720] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781092 ] 00:05:11.238 [2024-12-15 12:45:19.037952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.238 [2024-12-15 12:45:19.060120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.238 [2024-12-15 12:45:19.060122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.512 12:45:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.512 12:45:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.512 12:45:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.512 Malloc0 00:05:11.512 12:45:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.771 Malloc1 00:05:11.771 12:45:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.771 
12:45:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.771 12:45:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.030 /dev/nbd0 00:05:12.030 12:45:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.030 12:45:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:12.030 1+0 records in 00:05:12.030 1+0 records out 00:05:12.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234482 s, 17.5 MB/s 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.030 12:45:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.030 12:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.030 12:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.030 12:45:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.289 /dev/nbd1 00:05:12.289 12:45:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.289 12:45:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.289 12:45:20 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.289 1+0 records in 00:05:12.289 1+0 records out 00:05:12.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194417 s, 21.1 MB/s 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.289 12:45:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.289 12:45:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.289 12:45:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.289 12:45:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.289 12:45:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.289 12:45:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.548 { 00:05:12.548 "nbd_device": "/dev/nbd0", 00:05:12.548 "bdev_name": "Malloc0" 00:05:12.548 }, 00:05:12.548 { 00:05:12.548 "nbd_device": "/dev/nbd1", 00:05:12.548 "bdev_name": "Malloc1" 00:05:12.548 } 00:05:12.548 ]' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.548 { 00:05:12.548 "nbd_device": "/dev/nbd0", 00:05:12.548 "bdev_name": "Malloc0" 00:05:12.548 
}, 00:05:12.548 { 00:05:12.548 "nbd_device": "/dev/nbd1", 00:05:12.548 "bdev_name": "Malloc1" 00:05:12.548 } 00:05:12.548 ]' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.548 /dev/nbd1' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.548 /dev/nbd1' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.548 256+0 records in 00:05:12.548 256+0 records out 00:05:12.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108062 s, 97.0 MB/s 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.548 256+0 records in 00:05:12.548 256+0 records out 00:05:12.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137449 s, 76.3 MB/s 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.548 256+0 records in 00:05:12.548 256+0 records out 00:05:12.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148991 s, 70.4 MB/s 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.548 12:45:20 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.548 12:45:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.807 12:45:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.066 12:45:20 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.066 12:45:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.325 12:45:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.325 12:45:21 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.584 12:45:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.584 [2024-12-15 12:45:21.448474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.584 [2024-12-15 12:45:21.468360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.584 [2024-12-15 12:45:21.468361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.843 [2024-12-15 12:45:21.507981] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.843 [2024-12-15 12:45:21.508022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.133 12:45:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.133 12:45:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.133 spdk_app_start Round 1 00:05:17.133 12:45:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781092 /var/tmp/spdk-nbd.sock 00:05:17.133 12:45:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781092 ']' 00:05:17.133 12:45:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.133 12:45:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.133 12:45:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:17.133 12:45:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.133 12:45:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.133 12:45:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.133 12:45:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.133 12:45:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.133 Malloc0 00:05:17.133 12:45:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.133 Malloc1 00:05:17.133 12:45:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.133 12:45:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.392 /dev/nbd0 00:05:17.392 12:45:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.392 12:45:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.392 1+0 records in 00:05:17.392 1+0 records out 00:05:17.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241989 s, 16.9 MB/s 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.392 12:45:25 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.392 12:45:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.392 12:45:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.392 12:45:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.392 12:45:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.651 /dev/nbd1 00:05:17.651 12:45:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.651 12:45:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.651 1+0 records in 00:05:17.651 1+0 records out 00:05:17.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240667 s, 17.0 MB/s 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.651 12:45:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.651 12:45:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.651 12:45:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.651 12:45:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.651 12:45:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.651 12:45:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.910 { 00:05:17.910 "nbd_device": "/dev/nbd0", 00:05:17.910 "bdev_name": "Malloc0" 00:05:17.910 }, 00:05:17.910 { 00:05:17.910 "nbd_device": "/dev/nbd1", 00:05:17.910 "bdev_name": "Malloc1" 00:05:17.910 } 00:05:17.910 ]' 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.910 { 00:05:17.910 "nbd_device": "/dev/nbd0", 00:05:17.910 "bdev_name": "Malloc0" 00:05:17.910 }, 00:05:17.910 { 00:05:17.910 "nbd_device": "/dev/nbd1", 00:05:17.910 "bdev_name": "Malloc1" 00:05:17.910 } 00:05:17.910 ]' 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.910 /dev/nbd1' 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.910 /dev/nbd1' 00:05:17.910 
12:45:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.910 256+0 records in 00:05:17.910 256+0 records out 00:05:17.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108375 s, 96.8 MB/s 00:05:17.910 12:45:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.911 256+0 records in 00:05:17.911 256+0 records out 00:05:17.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142222 s, 73.7 MB/s 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.911 256+0 records in 00:05:17.911 256+0 records out 00:05:17.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151586 s, 69.2 MB/s 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.911 12:45:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.171 12:45:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.430 12:45:26 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.430 12:45:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.689 12:45:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.689 12:45:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.948 12:45:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.948 [2024-12-15 12:45:26.766999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.948 [2024-12-15 12:45:26.786709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.948 [2024-12-15 12:45:26.786711] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.948 [2024-12-15 12:45:26.827711] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.948 [2024-12-15 12:45:26.827750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.236 12:45:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.236 12:45:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.236 spdk_app_start Round 2 00:05:22.236 12:45:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781092 /var/tmp/spdk-nbd.sock 00:05:22.236 12:45:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781092 ']' 00:05:22.236 12:45:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.236 12:45:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.236 12:45:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:22.236 12:45:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.236 12:45:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.236 12:45:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.236 12:45:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.236 12:45:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.236 Malloc0 00:05:22.236 12:45:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.496 Malloc1 00:05:22.496 12:45:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.496 12:45:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.755 /dev/nbd0 00:05:22.755 12:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.755 12:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.755 1+0 records in 00:05:22.755 1+0 records out 00:05:22.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231433 s, 17.7 MB/s 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.755 12:45:30 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.755 12:45:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.755 12:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.755 12:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.755 12:45:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.015 /dev/nbd1 00:05:23.015 12:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.015 12:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.015 1+0 records in 00:05:23.015 1+0 records out 00:05:23.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260189 s, 15.7 MB/s 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.015 12:45:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.015 12:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.015 12:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.015 12:45:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.015 12:45:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.015 12:45:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.275 12:45:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.275 { 00:05:23.275 "nbd_device": "/dev/nbd0", 00:05:23.275 "bdev_name": "Malloc0" 00:05:23.275 }, 00:05:23.275 { 00:05:23.275 "nbd_device": "/dev/nbd1", 00:05:23.275 "bdev_name": "Malloc1" 00:05:23.275 } 00:05:23.275 ]' 00:05:23.275 12:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.275 { 00:05:23.275 "nbd_device": "/dev/nbd0", 00:05:23.275 "bdev_name": "Malloc0" 00:05:23.275 }, 00:05:23.275 { 00:05:23.275 "nbd_device": "/dev/nbd1", 00:05:23.275 "bdev_name": "Malloc1" 00:05:23.275 } 00:05:23.275 ]' 00:05:23.275 12:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.275 12:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.275 /dev/nbd1' 00:05:23.275 12:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.275 /dev/nbd1' 00:05:23.275 
12:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.275 256+0 records in 00:05:23.275 256+0 records out 00:05:23.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102536 s, 102 MB/s 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.275 256+0 records in 00:05:23.275 256+0 records out 00:05:23.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138501 s, 75.7 MB/s 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.275 256+0 records in 00:05:23.275 256+0 records out 00:05:23.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151808 s, 69.1 MB/s 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.275 12:45:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.534 12:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.534 12:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.534 12:45:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.535 12:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.535 12:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.535 12:45:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.535 12:45:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.535 12:45:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.535 12:45:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.535 12:45:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.793 12:45:31 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.793 12:45:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.052 12:45:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.052 12:45:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.052 12:45:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.312 [2024-12-15 12:45:32.100522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.312 [2024-12-15 12:45:32.120527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.312 [2024-12-15 12:45:32.120529] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.312 [2024-12-15 12:45:32.160986] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.312 [2024-12-15 12:45:32.161027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.607 12:45:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 781092 /var/tmp/spdk-nbd.sock 00:05:27.607 12:45:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781092 ']' 00:05:27.607 12:45:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.607 12:45:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.607 12:45:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:27.607 12:45:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.607 12:45:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:27.607 12:45:35 event.app_repeat -- event/event.sh@39 -- # killprocess 781092 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 781092 ']' 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 781092 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 781092 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 781092' 00:05:27.607 killing process with pid 781092 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@973 -- # kill 781092 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@978 -- # wait 781092 00:05:27.607 spdk_app_start is called in Round 0. 00:05:27.607 Shutdown signal received, stop current app iteration 00:05:27.607 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:27.607 spdk_app_start is called in Round 1. 00:05:27.607 Shutdown signal received, stop current app iteration 00:05:27.607 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:27.607 spdk_app_start is called in Round 2. 
00:05:27.607 Shutdown signal received, stop current app iteration 00:05:27.607 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:27.607 spdk_app_start is called in Round 3. 00:05:27.607 Shutdown signal received, stop current app iteration 00:05:27.607 12:45:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.607 12:45:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:27.607 00:05:27.607 real 0m16.407s 00:05:27.607 user 0m36.223s 00:05:27.607 sys 0m2.509s 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.607 12:45:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.607 ************************************ 00:05:27.607 END TEST app_repeat 00:05:27.607 ************************************ 00:05:27.608 12:45:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.608 12:45:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.608 12:45:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.608 12:45:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.608 12:45:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.608 ************************************ 00:05:27.608 START TEST cpu_locks 00:05:27.608 ************************************ 00:05:27.608 12:45:35 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.608 * Looking for test storage... 
00:05:27.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:27.608 12:45:35 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.608 12:45:35 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.608 12:45:35 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.867 12:45:35 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.868 12:45:35 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:27.868 12:45:35 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.868 12:45:35 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.868 --rc genhtml_branch_coverage=1 00:05:27.868 --rc genhtml_function_coverage=1 00:05:27.868 --rc genhtml_legend=1 00:05:27.868 --rc geninfo_all_blocks=1 00:05:27.868 --rc geninfo_unexecuted_blocks=1 00:05:27.868 00:05:27.868 ' 00:05:27.868 12:45:35 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.868 --rc genhtml_branch_coverage=1 00:05:27.868 --rc genhtml_function_coverage=1 00:05:27.868 --rc genhtml_legend=1 00:05:27.868 --rc geninfo_all_blocks=1 00:05:27.868 --rc geninfo_unexecuted_blocks=1 
00:05:27.868 00:05:27.868 ' 00:05:27.868 12:45:35 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.868 --rc genhtml_branch_coverage=1 00:05:27.868 --rc genhtml_function_coverage=1 00:05:27.868 --rc genhtml_legend=1 00:05:27.868 --rc geninfo_all_blocks=1 00:05:27.868 --rc geninfo_unexecuted_blocks=1 00:05:27.868 00:05:27.868 ' 00:05:27.868 12:45:35 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.868 --rc genhtml_branch_coverage=1 00:05:27.868 --rc genhtml_function_coverage=1 00:05:27.868 --rc genhtml_legend=1 00:05:27.868 --rc geninfo_all_blocks=1 00:05:27.868 --rc geninfo_unexecuted_blocks=1 00:05:27.868 00:05:27.868 ' 00:05:27.868 12:45:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.868 12:45:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.868 12:45:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.868 12:45:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.868 12:45:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.868 12:45:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.868 12:45:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.868 ************************************ 00:05:27.868 START TEST default_locks 00:05:27.868 ************************************ 00:05:27.868 12:45:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:27.868 12:45:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=784022 00:05:27.868 12:45:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 784022 00:05:27.868 12:45:35 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.868 12:45:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 784022 ']' 00:05:27.868 12:45:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.868 12:45:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.868 12:45:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.868 12:45:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.868 12:45:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.868 [2024-12-15 12:45:35.659324] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:27.868 [2024-12-15 12:45:35.659365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784022 ] 00:05:27.868 [2024-12-15 12:45:35.735912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.868 [2024-12-15 12:45:35.758609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.128 12:45:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.128 12:45:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:28.128 12:45:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 784022 00:05:28.128 12:45:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 784022 00:05:28.128 12:45:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.387 lslocks: write error 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 784022 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 784022 ']' 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 784022 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784022 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 784022' 00:05:28.387 killing process with pid 784022 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 784022 00:05:28.387 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 784022 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 784022 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 784022 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 784022 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 784022 ']' 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (784022) - No such process 00:05:28.647 ERROR: process (pid: 784022) is no longer running 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.647 00:05:28.647 real 0m0.904s 00:05:28.647 user 0m0.854s 00:05:28.647 sys 0m0.430s 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.647 12:45:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.647 ************************************ 00:05:28.647 END TEST default_locks 00:05:28.647 ************************************ 00:05:28.647 12:45:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:28.647 12:45:36 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.647 12:45:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.647 12:45:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.907 ************************************ 00:05:28.907 START TEST default_locks_via_rpc 00:05:28.907 ************************************ 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=784271 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 784271 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 784271 ']' 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.907 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.907 [2024-12-15 12:45:36.629383] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:28.907 [2024-12-15 12:45:36.629422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784271 ] 00:05:28.907 [2024-12-15 12:45:36.704874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.907 [2024-12-15 12:45:36.727507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.166 12:45:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 784271 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 784271 00:05:29.166 12:45:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 784271 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 784271 ']' 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 784271 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784271 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784271' 00:05:29.426 killing process with pid 784271 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 784271 00:05:29.426 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 784271 00:05:29.685 00:05:29.685 real 0m0.884s 00:05:29.685 user 0m0.828s 00:05:29.685 sys 0m0.428s 00:05:29.685 12:45:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.685 12:45:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.685 ************************************ 00:05:29.685 END TEST default_locks_via_rpc 00:05:29.685 ************************************ 00:05:29.685 12:45:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:29.685 12:45:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.685 12:45:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.685 12:45:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.685 ************************************ 00:05:29.685 START TEST non_locking_app_on_locked_coremask 00:05:29.685 ************************************ 00:05:29.685 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:29.685 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=784520 00:05:29.685 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 784520 /var/tmp/spdk.sock 00:05:29.686 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.686 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784520 ']' 00:05:29.686 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.686 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.686 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:29.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.686 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.686 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.686 [2024-12-15 12:45:37.580348] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:29.686 [2024-12-15 12:45:37.580385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784520 ] 00:05:29.945 [2024-12-15 12:45:37.654873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.945 [2024-12-15 12:45:37.677582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=784530 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 784530 /var/tmp/spdk2.sock 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784530 ']' 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.204 12:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.204 [2024-12-15 12:45:37.927711] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:30.204 [2024-12-15 12:45:37.927759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784530 ] 00:05:30.204 [2024-12-15 12:45:38.018992] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:30.204 [2024-12-15 12:45:38.019021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.204 [2024-12-15 12:45:38.062650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.143 12:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.143 12:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.143 12:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 784520 00:05:31.143 12:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 784520 00:05:31.143 12:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.403 lslocks: write error 00:05:31.403 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 784520 00:05:31.403 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784520 ']' 00:05:31.403 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 784520 00:05:31.403 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.403 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.403 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784520 00:05:31.663 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.663 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.663 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 784520' 00:05:31.663 killing process with pid 784520 00:05:31.663 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 784520 00:05:31.663 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 784520 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 784530 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784530 ']' 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 784530 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784530 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784530' 00:05:32.233 killing process with pid 784530 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 784530 00:05:32.233 12:45:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 784530 00:05:32.493 00:05:32.493 real 0m2.722s 00:05:32.493 user 0m2.885s 00:05:32.493 sys 0m0.931s 00:05:32.493 12:45:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.493 12:45:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.493 ************************************ 00:05:32.493 END TEST non_locking_app_on_locked_coremask 00:05:32.493 ************************************ 00:05:32.493 12:45:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:32.493 12:45:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.493 12:45:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.493 12:45:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.493 ************************************ 00:05:32.493 START TEST locking_app_on_unlocked_coremask 00:05:32.493 ************************************ 00:05:32.493 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:32.493 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=785006 00:05:32.493 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 785006 /var/tmp/spdk.sock 00:05:32.493 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:32.493 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785006 ']' 00:05:32.493 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.493 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.493 12:45:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.493 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.493 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.493 [2024-12-15 12:45:40.374891] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:32.493 [2024-12-15 12:45:40.374936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785006 ] 00:05:32.753 [2024-12-15 12:45:40.450439] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.753 [2024-12-15 12:45:40.450465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.753 [2024-12-15 12:45:40.470915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=785012 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 785012 /var/tmp/spdk2.sock 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785012 ']' 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.013 12:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.013 [2024-12-15 12:45:40.736206] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:33.013 [2024-12-15 12:45:40.736253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785012 ] 00:05:33.013 [2024-12-15 12:45:40.827213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.013 [2024-12-15 12:45:40.869431] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.956 12:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.956 12:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:33.956 12:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 785012 00:05:33.956 12:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 785012 00:05:33.956 12:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.523 lslocks: write error 00:05:34.523 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 785006 00:05:34.523 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785006 ']' 00:05:34.523 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 785006 00:05:34.523 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.523 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.524 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785006 00:05:34.524 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.524 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.524 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785006' 00:05:34.524 killing process with pid 785006 00:05:34.524 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 785006 00:05:34.524 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 785006 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 785012 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785012 ']' 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 785012 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785012 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785012' 00:05:35.093 killing process with pid 785012 00:05:35.093 12:45:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 785012 00:05:35.093 12:45:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 785012 00:05:35.353 00:05:35.353 real 0m2.777s 00:05:35.353 user 0m2.925s 00:05:35.353 sys 0m0.970s 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.353 ************************************ 00:05:35.353 END TEST locking_app_on_unlocked_coremask 00:05:35.353 ************************************ 00:05:35.353 12:45:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:35.353 12:45:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.353 12:45:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.353 12:45:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.353 ************************************ 00:05:35.353 START TEST locking_app_on_locked_coremask 00:05:35.353 ************************************ 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=785494 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 785494 /var/tmp/spdk.sock 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785494 ']' 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.353 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.353 [2024-12-15 12:45:43.222204] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:35.353 [2024-12-15 12:45:43.222247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785494 ] 00:05:35.612 [2024-12-15 12:45:43.293565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.612 [2024-12-15 12:45:43.313352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=785502 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 785502 /var/tmp/spdk2.sock 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 785502 /var/tmp/spdk2.sock 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 785502 /var/tmp/spdk2.sock 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785502 ']' 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.612 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:35.613 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.613 12:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.872 [2024-12-15 12:45:43.565230] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:35.872 [2024-12-15 12:45:43.565276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785502 ] 00:05:35.872 [2024-12-15 12:45:43.653114] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 785494 has claimed it. 00:05:35.872 [2024-12-15 12:45:43.653153] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:36.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (785502) - No such process 00:05:36.440 ERROR: process (pid: 785502) is no longer running 00:05:36.440 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.440 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:36.440 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:36.440 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.440 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.440 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.440 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 785494 00:05:36.440 12:45:44 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 785494 00:05:36.440 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.699 lslocks: write error 00:05:36.699 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 785494 00:05:36.699 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785494 ']' 00:05:36.699 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 785494 00:05:36.699 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.699 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.699 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785494 00:05:36.959 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.959 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.959 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785494' 00:05:36.959 killing process with pid 785494 00:05:36.959 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 785494 00:05:36.959 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 785494 00:05:37.218 00:05:37.218 real 0m1.746s 00:05:37.218 user 0m1.890s 00:05:37.218 sys 0m0.591s 00:05:37.218 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.218 12:45:44 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.218 ************************************ 00:05:37.219 END TEST locking_app_on_locked_coremask 00:05:37.219 ************************************ 00:05:37.219 12:45:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:37.219 12:45:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.219 12:45:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.219 12:45:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.219 ************************************ 00:05:37.219 START TEST locking_overlapped_coremask 00:05:37.219 ************************************ 00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=785778 00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 785778 /var/tmp/spdk.sock 00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 785778 ']' 00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.219 12:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.219 [2024-12-15 12:45:45.034490] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:37.219 [2024-12-15 12:45:45.034535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785778 ] 00:05:37.219 [2024-12-15 12:45:45.109359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.478 [2024-12-15 12:45:45.131554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.478 [2024-12-15 12:45:45.131663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.478 [2024-12-15 12:45:45.131663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=785840 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 785840 /var/tmp/spdk2.sock 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 785840 /var/tmp/spdk2.sock 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 785840 /var/tmp/spdk2.sock 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 785840 ']' 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.478 12:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.738 [2024-12-15 12:45:45.385854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:37.738 [2024-12-15 12:45:45.385902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785840 ] 00:05:37.738 [2024-12-15 12:45:45.479103] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 785778 has claimed it. 00:05:37.738 [2024-12-15 12:45:45.479144] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (785840) - No such process 00:05:38.307 ERROR: process (pid: 785840) is no longer running 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 785778 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 785778 ']' 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 785778 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785778 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785778' 00:05:38.307 killing process with pid 785778 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 785778 00:05:38.307 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 785778 00:05:38.566 00:05:38.566 real 0m1.409s 00:05:38.566 user 0m3.936s 00:05:38.567 sys 0m0.392s 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.567 ************************************ 
00:05:38.567 END TEST locking_overlapped_coremask 00:05:38.567 ************************************ 00:05:38.567 12:45:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:38.567 12:45:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.567 12:45:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.567 12:45:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.567 ************************************ 00:05:38.567 START TEST locking_overlapped_coremask_via_rpc 00:05:38.567 ************************************ 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=786035 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 786035 /var/tmp/spdk.sock 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786035 ']' 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:38.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.567 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.826 [2024-12-15 12:45:46.510366] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:38.826 [2024-12-15 12:45:46.510414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786035 ] 00:05:38.826 [2024-12-15 12:45:46.585023] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:38.826 [2024-12-15 12:45:46.585049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.826 [2024-12-15 12:45:46.607141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.826 [2024-12-15 12:45:46.607249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.826 [2024-12-15 12:45:46.607250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=786173 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 786173 /var/tmp/spdk2.sock 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786173 ']' 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.086 12:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.086 [2024-12-15 12:45:46.866193] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:39.086 [2024-12-15 12:45:46.866247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786173 ] 00:05:39.086 [2024-12-15 12:45:46.958650] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.086 [2024-12-15 12:45:46.958681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.345 [2024-12-15 12:45:47.007374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.345 [2024-12-15 12:45:47.010873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.345 [2024-12-15 12:45:47.010874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.913 12:45:47 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.913 [2024-12-15 12:45:47.725897] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 786035 has claimed it. 00:05:39.913 request: 00:05:39.913 { 00:05:39.913 "method": "framework_enable_cpumask_locks", 00:05:39.913 "req_id": 1 00:05:39.913 } 00:05:39.913 Got JSON-RPC error response 00:05:39.913 response: 00:05:39.913 { 00:05:39.913 "code": -32603, 00:05:39.913 "message": "Failed to claim CPU core: 2" 00:05:39.913 } 00:05:39.913 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 786035 /var/tmp/spdk.sock 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 786035 ']' 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.914 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.173 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.173 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.173 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 786173 /var/tmp/spdk2.sock 00:05:40.173 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786173 ']' 00:05:40.173 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.173 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.173 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:40.173 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.173 12:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.432 12:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.432 12:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.432 12:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:40.432 12:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.432 12:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.433 12:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.433 00:05:40.433 real 0m1.683s 00:05:40.433 user 0m0.846s 00:05:40.433 sys 0m0.130s 00:05:40.433 12:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.433 12:45:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.433 ************************************ 00:05:40.433 END TEST locking_overlapped_coremask_via_rpc 00:05:40.433 ************************************ 00:05:40.433 12:45:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:40.433 12:45:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 786035 ]] 00:05:40.433 12:45:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 786035 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786035 ']' 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786035 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786035 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786035' 00:05:40.433 killing process with pid 786035 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 786035 00:05:40.433 12:45:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 786035 00:05:40.692 12:45:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 786173 ]] 00:05:40.692 12:45:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 786173 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786173 ']' 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786173 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786173 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786173' 00:05:40.692 
killing process with pid 786173 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 786173 00:05:40.692 12:45:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 786173 00:05:41.260 12:45:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.260 12:45:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:41.260 12:45:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 786035 ]] 00:05:41.260 12:45:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 786035 00:05:41.260 12:45:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786035 ']' 00:05:41.260 12:45:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786035 00:05:41.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (786035) - No such process 00:05:41.260 12:45:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 786035 is not found' 00:05:41.260 Process with pid 786035 is not found 00:05:41.260 12:45:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 786173 ]] 00:05:41.260 12:45:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 786173 00:05:41.260 12:45:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786173 ']' 00:05:41.260 12:45:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786173 00:05:41.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (786173) - No such process 00:05:41.260 12:45:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 786173 is not found' 00:05:41.260 Process with pid 786173 is not found 00:05:41.260 12:45:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.260 00:05:41.260 real 0m13.498s 00:05:41.260 user 0m23.919s 00:05:41.260 sys 0m4.837s 00:05:41.260 12:45:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.261 12:45:48 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:41.261 ************************************ 00:05:41.261 END TEST cpu_locks 00:05:41.261 ************************************ 00:05:41.261 00:05:41.261 real 0m38.134s 00:05:41.261 user 1m13.316s 00:05:41.261 sys 0m8.325s 00:05:41.261 12:45:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.261 12:45:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.261 ************************************ 00:05:41.261 END TEST event 00:05:41.261 ************************************ 00:05:41.261 12:45:48 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:41.261 12:45:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.261 12:45:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.261 12:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:41.261 ************************************ 00:05:41.261 START TEST thread 00:05:41.261 ************************************ 00:05:41.261 12:45:49 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:41.261 * Looking for test storage... 
00:05:41.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:41.261 12:45:49 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.261 12:45:49 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.261 12:45:49 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.520 12:45:49 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.520 12:45:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.520 12:45:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.520 12:45:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.520 12:45:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.520 12:45:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.520 12:45:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.520 12:45:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.520 12:45:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.520 12:45:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.520 12:45:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.520 12:45:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.520 12:45:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:41.520 12:45:49 thread -- scripts/common.sh@345 -- # : 1 00:05:41.520 12:45:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.520 12:45:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.520 12:45:49 thread -- scripts/common.sh@365 -- # decimal 1 00:05:41.520 12:45:49 thread -- scripts/common.sh@353 -- # local d=1 00:05:41.520 12:45:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.520 12:45:49 thread -- scripts/common.sh@355 -- # echo 1 00:05:41.520 12:45:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.520 12:45:49 thread -- scripts/common.sh@366 -- # decimal 2 00:05:41.520 12:45:49 thread -- scripts/common.sh@353 -- # local d=2 00:05:41.520 12:45:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.520 12:45:49 thread -- scripts/common.sh@355 -- # echo 2 00:05:41.520 12:45:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.520 12:45:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.520 12:45:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.520 12:45:49 thread -- scripts/common.sh@368 -- # return 0 00:05:41.520 12:45:49 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.520 12:45:49 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.520 --rc genhtml_branch_coverage=1 00:05:41.520 --rc genhtml_function_coverage=1 00:05:41.520 --rc genhtml_legend=1 00:05:41.520 --rc geninfo_all_blocks=1 00:05:41.520 --rc geninfo_unexecuted_blocks=1 00:05:41.520 00:05:41.520 ' 00:05:41.520 12:45:49 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.520 --rc genhtml_branch_coverage=1 00:05:41.520 --rc genhtml_function_coverage=1 00:05:41.520 --rc genhtml_legend=1 00:05:41.520 --rc geninfo_all_blocks=1 00:05:41.520 --rc geninfo_unexecuted_blocks=1 00:05:41.520 00:05:41.520 ' 00:05:41.520 12:45:49 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.520 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.520 --rc genhtml_branch_coverage=1 00:05:41.520 --rc genhtml_function_coverage=1 00:05:41.520 --rc genhtml_legend=1 00:05:41.520 --rc geninfo_all_blocks=1 00:05:41.520 --rc geninfo_unexecuted_blocks=1 00:05:41.520 00:05:41.520 ' 00:05:41.520 12:45:49 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.520 --rc genhtml_branch_coverage=1 00:05:41.520 --rc genhtml_function_coverage=1 00:05:41.520 --rc genhtml_legend=1 00:05:41.520 --rc geninfo_all_blocks=1 00:05:41.520 --rc geninfo_unexecuted_blocks=1 00:05:41.520 00:05:41.520 ' 00:05:41.520 12:45:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.520 12:45:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:41.520 12:45:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.520 12:45:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.520 ************************************ 00:05:41.520 START TEST thread_poller_perf 00:05:41.520 ************************************ 00:05:41.520 12:45:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.520 [2024-12-15 12:45:49.239641] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:41.520 [2024-12-15 12:45:49.239709] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786600 ] 00:05:41.520 [2024-12-15 12:45:49.318675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.520 [2024-12-15 12:45:49.340752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.520 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:42.899 [2024-12-15T11:45:50.806Z] ====================================== 00:05:42.899 [2024-12-15T11:45:50.806Z] busy:2107598646 (cyc) 00:05:42.899 [2024-12-15T11:45:50.806Z] total_run_count: 415000 00:05:42.899 [2024-12-15T11:45:50.806Z] tsc_hz: 2100000000 (cyc) 00:05:42.899 [2024-12-15T11:45:50.806Z] ====================================== 00:05:42.899 [2024-12-15T11:45:50.806Z] poller_cost: 5078 (cyc), 2418 (nsec) 00:05:42.899 00:05:42.899 real 0m1.166s 00:05:42.899 user 0m1.089s 00:05:42.899 sys 0m0.072s 00:05:42.899 12:45:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.899 12:45:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.899 ************************************ 00:05:42.899 END TEST thread_poller_perf 00:05:42.899 ************************************ 00:05:42.899 12:45:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.899 12:45:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:42.899 12:45:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.899 12:45:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.899 ************************************ 00:05:42.899 START TEST thread_poller_perf 00:05:42.899 
************************************ 00:05:42.899 12:45:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.899 [2024-12-15 12:45:50.472147] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:42.899 [2024-12-15 12:45:50.472213] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786841 ] 00:05:42.899 [2024-12-15 12:45:50.550153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.899 [2024-12-15 12:45:50.572118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.899 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:43.835 [2024-12-15T11:45:51.742Z] ====================================== 00:05:43.835 [2024-12-15T11:45:51.742Z] busy:2101401094 (cyc) 00:05:43.835 [2024-12-15T11:45:51.742Z] total_run_count: 5172000 00:05:43.835 [2024-12-15T11:45:51.742Z] tsc_hz: 2100000000 (cyc) 00:05:43.835 [2024-12-15T11:45:51.742Z] ====================================== 00:05:43.835 [2024-12-15T11:45:51.742Z] poller_cost: 406 (cyc), 193 (nsec) 00:05:43.835 00:05:43.835 real 0m1.151s 00:05:43.835 user 0m1.068s 00:05:43.835 sys 0m0.078s 00:05:43.835 12:45:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.835 12:45:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.835 ************************************ 00:05:43.835 END TEST thread_poller_perf 00:05:43.835 ************************************ 00:05:43.835 12:45:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:43.835 00:05:43.835 real 0m2.635s 00:05:43.835 user 0m2.324s 00:05:43.835 sys 0m0.324s 00:05:43.836 12:45:51 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.836 12:45:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.836 ************************************ 00:05:43.836 END TEST thread 00:05:43.836 ************************************ 00:05:43.836 12:45:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:43.836 12:45:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:43.836 12:45:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.836 12:45:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.836 12:45:51 -- common/autotest_common.sh@10 -- # set +x 00:05:43.836 ************************************ 00:05:43.836 START TEST app_cmdline 00:05:43.836 ************************************ 00:05:43.836 12:45:51 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:44.095 * Looking for test storage... 00:05:44.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.095 12:45:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.095 --rc genhtml_branch_coverage=1 
00:05:44.095 --rc genhtml_function_coverage=1 00:05:44.095 --rc genhtml_legend=1 00:05:44.095 --rc geninfo_all_blocks=1 00:05:44.095 --rc geninfo_unexecuted_blocks=1 00:05:44.095 00:05:44.095 ' 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.095 --rc genhtml_branch_coverage=1 00:05:44.095 --rc genhtml_function_coverage=1 00:05:44.095 --rc genhtml_legend=1 00:05:44.095 --rc geninfo_all_blocks=1 00:05:44.095 --rc geninfo_unexecuted_blocks=1 00:05:44.095 00:05:44.095 ' 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.095 --rc genhtml_branch_coverage=1 00:05:44.095 --rc genhtml_function_coverage=1 00:05:44.095 --rc genhtml_legend=1 00:05:44.095 --rc geninfo_all_blocks=1 00:05:44.095 --rc geninfo_unexecuted_blocks=1 00:05:44.095 00:05:44.095 ' 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.095 --rc genhtml_branch_coverage=1 00:05:44.095 --rc genhtml_function_coverage=1 00:05:44.095 --rc genhtml_legend=1 00:05:44.095 --rc geninfo_all_blocks=1 00:05:44.095 --rc geninfo_unexecuted_blocks=1 00:05:44.095 00:05:44.095 ' 00:05:44.095 12:45:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:44.095 12:45:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=787129 00:05:44.095 12:45:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 787129 00:05:44.095 12:45:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 787129 ']' 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.095 12:45:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.095 [2024-12-15 12:45:51.938836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:44.095 [2024-12-15 12:45:51.938883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787129 ] 00:05:44.354 [2024-12-15 12:45:52.014610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.354 [2024-12-15 12:45:52.036807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.354 12:45:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.354 12:45:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:44.354 12:45:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:44.613 { 00:05:44.613 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:05:44.613 "fields": { 00:05:44.613 "major": 25, 00:05:44.613 "minor": 1, 00:05:44.613 "patch": 0, 00:05:44.613 "suffix": "-pre", 00:05:44.613 "commit": "e01cb43b8" 00:05:44.613 } 00:05:44.613 } 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:44.613 12:45:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:44.613 12:45:52 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.873 request: 00:05:44.873 { 00:05:44.873 "method": "env_dpdk_get_mem_stats", 00:05:44.873 "req_id": 1 00:05:44.873 } 00:05:44.873 Got JSON-RPC error response 00:05:44.873 response: 00:05:44.873 { 00:05:44.873 "code": -32601, 00:05:44.873 "message": "Method not found" 00:05:44.873 } 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.873 12:45:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 787129 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 787129 ']' 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 787129 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787129 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787129' 00:05:44.873 killing process with pid 787129 00:05:44.873 12:45:52 
app_cmdline -- common/autotest_common.sh@973 -- # kill 787129 00:05:44.873 12:45:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 787129 00:05:45.132 00:05:45.132 real 0m1.310s 00:05:45.132 user 0m1.534s 00:05:45.132 sys 0m0.445s 00:05:45.132 12:45:53 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.132 12:45:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.132 ************************************ 00:05:45.132 END TEST app_cmdline 00:05:45.132 ************************************ 00:05:45.392 12:45:53 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:45.392 12:45:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.392 12:45:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.392 12:45:53 -- common/autotest_common.sh@10 -- # set +x 00:05:45.392 ************************************ 00:05:45.392 START TEST version 00:05:45.392 ************************************ 00:05:45.392 12:45:53 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:45.392 * Looking for test storage... 
00:05:45.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:45.392 12:45:53 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.392 12:45:53 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.392 12:45:53 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.392 12:45:53 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.392 12:45:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.392 12:45:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.392 12:45:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.392 12:45:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.392 12:45:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.392 12:45:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.392 12:45:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.392 12:45:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.392 12:45:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.392 12:45:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.392 12:45:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.392 12:45:53 version -- scripts/common.sh@344 -- # case "$op" in 00:05:45.392 12:45:53 version -- scripts/common.sh@345 -- # : 1 00:05:45.392 12:45:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.392 12:45:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.392 12:45:53 version -- scripts/common.sh@365 -- # decimal 1 00:05:45.392 12:45:53 version -- scripts/common.sh@353 -- # local d=1 00:05:45.392 12:45:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.392 12:45:53 version -- scripts/common.sh@355 -- # echo 1 00:05:45.392 12:45:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.392 12:45:53 version -- scripts/common.sh@366 -- # decimal 2 00:05:45.392 12:45:53 version -- scripts/common.sh@353 -- # local d=2 00:05:45.392 12:45:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.392 12:45:53 version -- scripts/common.sh@355 -- # echo 2 00:05:45.392 12:45:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.392 12:45:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.392 12:45:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.392 12:45:53 version -- scripts/common.sh@368 -- # return 0 00:05:45.392 12:45:53 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.393 12:45:53 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.393 --rc genhtml_branch_coverage=1 00:05:45.393 --rc genhtml_function_coverage=1 00:05:45.393 --rc genhtml_legend=1 00:05:45.393 --rc geninfo_all_blocks=1 00:05:45.393 --rc geninfo_unexecuted_blocks=1 00:05:45.393 00:05:45.393 ' 00:05:45.393 12:45:53 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.393 --rc genhtml_branch_coverage=1 00:05:45.393 --rc genhtml_function_coverage=1 00:05:45.393 --rc genhtml_legend=1 00:05:45.393 --rc geninfo_all_blocks=1 00:05:45.393 --rc geninfo_unexecuted_blocks=1 00:05:45.393 00:05:45.393 ' 00:05:45.393 12:45:53 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.393 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.393 --rc genhtml_branch_coverage=1 00:05:45.393 --rc genhtml_function_coverage=1 00:05:45.393 --rc genhtml_legend=1 00:05:45.393 --rc geninfo_all_blocks=1 00:05:45.393 --rc geninfo_unexecuted_blocks=1 00:05:45.393 00:05:45.393 ' 00:05:45.393 12:45:53 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.393 --rc genhtml_branch_coverage=1 00:05:45.393 --rc genhtml_function_coverage=1 00:05:45.393 --rc genhtml_legend=1 00:05:45.393 --rc geninfo_all_blocks=1 00:05:45.393 --rc geninfo_unexecuted_blocks=1 00:05:45.393 00:05:45.393 ' 00:05:45.393 12:45:53 version -- app/version.sh@17 -- # get_header_version major 00:05:45.393 12:45:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.393 12:45:53 version -- app/version.sh@14 -- # cut -f2 00:05:45.393 12:45:53 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.393 12:45:53 version -- app/version.sh@17 -- # major=25 00:05:45.393 12:45:53 version -- app/version.sh@18 -- # get_header_version minor 00:05:45.393 12:45:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.393 12:45:53 version -- app/version.sh@14 -- # cut -f2 00:05:45.393 12:45:53 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.393 12:45:53 version -- app/version.sh@18 -- # minor=1 00:05:45.393 12:45:53 version -- app/version.sh@19 -- # get_header_version patch 00:05:45.393 12:45:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.393 12:45:53 version -- app/version.sh@14 -- # cut -f2 00:05:45.393 12:45:53 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.393 
12:45:53 version -- app/version.sh@19 -- # patch=0 00:05:45.393 12:45:53 version -- app/version.sh@20 -- # get_header_version suffix 00:05:45.393 12:45:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.393 12:45:53 version -- app/version.sh@14 -- # cut -f2 00:05:45.393 12:45:53 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.393 12:45:53 version -- app/version.sh@20 -- # suffix=-pre 00:05:45.393 12:45:53 version -- app/version.sh@22 -- # version=25.1 00:05:45.393 12:45:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:45.393 12:45:53 version -- app/version.sh@28 -- # version=25.1rc0 00:05:45.393 12:45:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:45.652 12:45:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:45.652 12:45:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:45.652 12:45:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:45.652 00:05:45.652 real 0m0.246s 00:05:45.652 user 0m0.149s 00:05:45.652 sys 0m0.140s 00:05:45.652 12:45:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.652 12:45:53 version -- common/autotest_common.sh@10 -- # set +x 00:05:45.652 ************************************ 00:05:45.652 END TEST version 00:05:45.652 ************************************ 00:05:45.652 12:45:53 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:45.652 12:45:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:45.652 12:45:53 -- spdk/autotest.sh@194 -- # uname -s 00:05:45.652 12:45:53 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:45.652 12:45:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.652 12:45:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.652 12:45:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:45.652 12:45:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:45.652 12:45:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:45.652 12:45:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.652 12:45:53 -- common/autotest_common.sh@10 -- # set +x 00:05:45.652 12:45:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:45.652 12:45:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:45.652 12:45:53 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:45.652 12:45:53 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:45.652 12:45:53 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:45.652 12:45:53 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:45.652 12:45:53 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:45.653 12:45:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.653 12:45:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.653 12:45:53 -- common/autotest_common.sh@10 -- # set +x 00:05:45.653 ************************************ 00:05:45.653 START TEST nvmf_tcp 00:05:45.653 ************************************ 00:05:45.653 12:45:53 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:45.653 * Looking for test storage... 
00:05:45.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:45.653 12:45:53 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.653 12:45:53 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.653 12:45:53 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.953 12:45:53 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.953 12:45:53 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:45.954 12:45:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:45.954 12:45:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.954 12:45:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:45.954 12:45:53 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.954 12:45:53 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.954 12:45:53 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.954 12:45:53 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:45.954 12:45:53 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.954 12:45:53 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.954 --rc genhtml_branch_coverage=1 00:05:45.954 --rc genhtml_function_coverage=1 00:05:45.954 --rc genhtml_legend=1 00:05:45.954 --rc geninfo_all_blocks=1 00:05:45.954 --rc geninfo_unexecuted_blocks=1 00:05:45.954 00:05:45.954 ' 00:05:45.954 12:45:53 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.954 --rc genhtml_branch_coverage=1 00:05:45.954 --rc genhtml_function_coverage=1 00:05:45.954 --rc genhtml_legend=1 00:05:45.954 --rc geninfo_all_blocks=1 00:05:45.954 --rc geninfo_unexecuted_blocks=1 00:05:45.954 00:05:45.954 ' 00:05:45.954 12:45:53 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.954 --rc genhtml_branch_coverage=1 00:05:45.954 --rc genhtml_function_coverage=1 00:05:45.954 --rc genhtml_legend=1 00:05:45.954 --rc geninfo_all_blocks=1 00:05:45.954 --rc geninfo_unexecuted_blocks=1 00:05:45.954 00:05:45.954 ' 00:05:45.954 12:45:53 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.954 --rc genhtml_branch_coverage=1 00:05:45.954 --rc genhtml_function_coverage=1 00:05:45.954 --rc genhtml_legend=1 00:05:45.954 --rc geninfo_all_blocks=1 00:05:45.954 --rc geninfo_unexecuted_blocks=1 00:05:45.954 00:05:45.954 ' 00:05:45.954 12:45:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:45.954 12:45:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:45.954 12:45:53 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:45.954 12:45:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.954 12:45:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.954 12:45:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.954 ************************************ 00:05:45.954 START TEST nvmf_target_core 00:05:45.954 ************************************ 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:45.954 * Looking for test storage... 
00:05:45.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.954 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.266 --rc genhtml_branch_coverage=1 00:05:46.266 --rc genhtml_function_coverage=1 00:05:46.266 --rc genhtml_legend=1 00:05:46.266 --rc geninfo_all_blocks=1 00:05:46.266 --rc geninfo_unexecuted_blocks=1 00:05:46.266 00:05:46.266 ' 00:05:46.266 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.266 --rc genhtml_branch_coverage=1 
00:05:46.266 --rc genhtml_function_coverage=1 00:05:46.266 --rc genhtml_legend=1 00:05:46.266 --rc geninfo_all_blocks=1 00:05:46.267 --rc geninfo_unexecuted_blocks=1 00:05:46.267 00:05:46.267 ' 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.267 --rc genhtml_branch_coverage=1 00:05:46.267 --rc genhtml_function_coverage=1 00:05:46.267 --rc genhtml_legend=1 00:05:46.267 --rc geninfo_all_blocks=1 00:05:46.267 --rc geninfo_unexecuted_blocks=1 00:05:46.267 00:05:46.267 ' 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.267 --rc genhtml_branch_coverage=1 00:05:46.267 --rc genhtml_function_coverage=1 00:05:46.267 --rc genhtml_legend=1 00:05:46.267 --rc geninfo_all_blocks=1 00:05:46.267 --rc geninfo_unexecuted_blocks=1 00:05:46.267 00:05:46.267 ' 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:46.267 ************************************ 00:05:46.267 START TEST nvmf_abort 00:05:46.267 ************************************ 00:05:46.267 12:45:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:46.267 * Looking for test storage... 
00:05:46.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.267 
12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:46.267 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.268 --rc genhtml_branch_coverage=1 00:05:46.268 --rc genhtml_function_coverage=1 00:05:46.268 --rc genhtml_legend=1 00:05:46.268 --rc geninfo_all_blocks=1 00:05:46.268 --rc 
geninfo_unexecuted_blocks=1 00:05:46.268 00:05:46.268 ' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.268 --rc genhtml_branch_coverage=1 00:05:46.268 --rc genhtml_function_coverage=1 00:05:46.268 --rc genhtml_legend=1 00:05:46.268 --rc geninfo_all_blocks=1 00:05:46.268 --rc geninfo_unexecuted_blocks=1 00:05:46.268 00:05:46.268 ' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.268 --rc genhtml_branch_coverage=1 00:05:46.268 --rc genhtml_function_coverage=1 00:05:46.268 --rc genhtml_legend=1 00:05:46.268 --rc geninfo_all_blocks=1 00:05:46.268 --rc geninfo_unexecuted_blocks=1 00:05:46.268 00:05:46.268 ' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.268 --rc genhtml_branch_coverage=1 00:05:46.268 --rc genhtml_function_coverage=1 00:05:46.268 --rc genhtml_legend=1 00:05:46.268 --rc geninfo_all_blocks=1 00:05:46.268 --rc geninfo_unexecuted_blocks=1 00:05:46.268 00:05:46.268 ' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.268 12:45:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:46.268 12:45:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:52.949 12:45:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:52.949 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:52.949 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:52.949 12:45:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:52.949 Found net devices under 0000:af:00.0: cvl_0_0 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:05:52.949 Found net devices under 0000:af:00.1: cvl_0_1 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:52.949 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:52.950 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:52.950 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:52.950 12:45:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:52.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:52.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:05:52.950 00:05:52.950 --- 10.0.0.2 ping statistics --- 00:05:52.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.950 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:52.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:52.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:05:52.950 00:05:52.950 --- 10.0.0.1 ping statistics --- 00:05:52.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.950 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=790757 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 790757 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 790757 ']' 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 [2024-12-15 12:46:00.115874] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
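The `waitforlisten 790757` call above polls until the freshly started `nvmf_tgt` process is listening on `/var/tmp/spdk.sock`. A minimal sketch of that idea (the function name `waitforsock` and its parameters are hypothetical, not the harness's real helper; it also tests plain existence with `-e` rather than `-S` so the sketch works with ordinary files):

```shell
# Hypothetical sketch of the waitforlisten pattern seen in the log:
# poll until a socket path appears, giving up after max_retries attempts.
waitforsock() {
    local path=$1 max_retries=${2:-100} i=0
    until [ -e "$path" ]; do
        i=$((i + 1))
        # bail out once the retry budget is spent
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

The real common.sh helper additionally verifies the process is still alive between polls; this sketch only covers the socket wait.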
00:05:52.950 [2024-12-15 12:46:00.115924] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:52.950 [2024-12-15 12:46:00.192928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.950 [2024-12-15 12:46:00.215965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:52.950 [2024-12-15 12:46:00.216001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:52.950 [2024-12-15 12:46:00.216009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.950 [2024-12-15 12:46:00.216016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.950 [2024-12-15 12:46:00.216020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
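The target was launched with `-m 0xE`, and the EAL notice reports three cores; the reactor messages that follow confirm cores 1, 2, and 3. A small sketch (not part of the test scripts) of how a DPDK-style hex core mask expands to core IDs:

```shell
# Sketch: expand a hex core mask into the list of selected core IDs.
# 0xE = binary 1110 -> cores 1 2 3, matching the three reactors in the log.
mask_to_cores() {
    local mask=$(( $1 )) core=0 out=""
    while [ "$mask" -ne 0 ]; do
        # low bit set means this core is selected
        if [ $(( mask & 1 )) -eq 1 ]; then
            out="$out $core"
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${out# }"
}
```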
00:05:52.950 [2024-12-15 12:46:00.217277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.950 [2024-12-15 12:46:00.217404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.950 [2024-12-15 12:46:00.217405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 [2024-12-15 12:46:00.356518] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 Malloc0 00:05:52.950 12:46:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 Delay0 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 [2024-12-15 12:46:00.420466] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.950 12:46:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:52.950 [2024-12-15 12:46:00.546509] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:54.857 [2024-12-15 12:46:02.612386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f0630 is same with the state(6) to be set 00:05:54.857 Initializing NVMe Controllers 00:05:54.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:54.857 controller IO queue size 128 less than required 00:05:54.857 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:54.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:54.857 Initialization complete. Launching workers. 
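The abort example's summary counters below can be cross-checked: assuming every submitted abort lands in exactly one bucket (success, unsuccessful, or failed), the buckets must sum to the submitted count. A sketch using this run's numbers:

```shell
# Sketch: sanity-check the abort bookkeeping printed by the example, under
# the assumption that submitted = success + unsuccessful + failed.
check_abort_totals() {
    local submitted=$1 success=$2 unsuccessful=$3 failed=$4
    [ $(( success + unsuccessful + failed )) -eq "$submitted" ]
}

# Counters from this run: 37581 aborts submitted; 37524 succeeded,
# 57 were unsuccessful, 0 failed outright.
check_abort_totals 37581 37524 57 0
```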
00:05:54.857 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37520 00:05:54.857 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37581, failed to submit 62 00:05:54.857 success 37524, unsuccessful 57, failed 0 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:54.857 rmmod nvme_tcp 00:05:54.857 rmmod nvme_fabrics 00:05:54.857 rmmod nvme_keyring 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:54.857 12:46:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 790757 ']' 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 790757 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 790757 ']' 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 790757 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 790757 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 790757' 00:05:54.857 killing process with pid 790757 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 790757 00:05:54.857 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 790757 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.116 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.117 12:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:57.657 00:05:57.657 real 0m11.101s 00:05:57.657 user 0m11.583s 00:05:57.657 sys 0m5.344s 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:57.657 ************************************ 00:05:57.657 END TEST nvmf_abort 00:05:57.657 ************************************ 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:57.657 ************************************ 00:05:57.657 START TEST nvmf_ns_hotplug_stress 00:05:57.657 ************************************ 00:05:57.657 12:46:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:57.657 * Looking for test storage... 00:05:57.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.657 
12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.657 12:46:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.657 --rc genhtml_branch_coverage=1 00:05:57.657 --rc genhtml_function_coverage=1 00:05:57.657 --rc genhtml_legend=1 00:05:57.657 --rc geninfo_all_blocks=1 00:05:57.657 --rc geninfo_unexecuted_blocks=1 00:05:57.657 00:05:57.657 ' 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.657 --rc genhtml_branch_coverage=1 00:05:57.657 --rc genhtml_function_coverage=1 00:05:57.657 --rc genhtml_legend=1 00:05:57.657 --rc geninfo_all_blocks=1 00:05:57.657 --rc geninfo_unexecuted_blocks=1 00:05:57.657 00:05:57.657 ' 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.657 --rc genhtml_branch_coverage=1 00:05:57.657 --rc genhtml_function_coverage=1 00:05:57.657 --rc genhtml_legend=1 00:05:57.657 --rc geninfo_all_blocks=1 00:05:57.657 --rc geninfo_unexecuted_blocks=1 00:05:57.657 00:05:57.657 ' 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.657 --rc genhtml_branch_coverage=1 00:05:57.657 --rc genhtml_function_coverage=1 00:05:57.657 --rc genhtml_legend=1 00:05:57.657 --rc geninfo_all_blocks=1 00:05:57.657 --rc geninfo_unexecuted_blocks=1 00:05:57.657 
00:05:57.657 ' 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.657 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:57.658 12:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:04.231 12:46:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:04.231 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:04.231 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:04.231 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:04.231 12:46:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:04.232 Found net devices under 0000:af:00.0: cvl_0_0 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.232 12:46:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:04.232 Found net devices under 0000:af:00.1: cvl_0_1 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:04.232 12:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:04.232 12:46:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:04.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:06:04.232 00:06:04.232 --- 10.0.0.2 ping statistics --- 00:06:04.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.232 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:06:04.232 00:06:04.232 --- 10.0.0.1 ping statistics --- 00:06:04.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.232 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=794702 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 794702 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 794702 ']' 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.232 [2024-12-15 12:46:11.321402] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:04.232 [2024-12-15 12:46:11.321445] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.232 [2024-12-15 12:46:11.400575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.232 [2024-12-15 12:46:11.422694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.232 [2024-12-15 12:46:11.422730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.232 [2024-12-15 12:46:11.422737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.232 [2024-12-15 12:46:11.422743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.232 [2024-12-15 12:46:11.422748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:04.232 [2024-12-15 12:46:11.424003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.232 [2024-12-15 12:46:11.424111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.232 [2024-12-15 12:46:11.424112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.232 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.233 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.233 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:04.233 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:04.233 [2024-12-15 12:46:11.732367] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.233 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:04.233 12:46:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.233 [2024-12-15 12:46:12.125751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.492 12:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:04.492 12:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:04.751 Malloc0 00:06:04.751 12:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:05.010 Delay0 00:06:05.010 12:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.269 12:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:05.269 NULL1 00:06:05.527 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:05.527 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:05.528 12:46:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=795187 00:06:05.528 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:05.528 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.786 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.045 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:06.045 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:06.305 true 00:06:06.305 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:06.305 12:46:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.305 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.564 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:06.564 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:06.822 true 00:06:06.822 12:46:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:06.822 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.080 12:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.339 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:07.339 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:07.598 true 00:06:07.598 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:07.598 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.598 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.856 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:07.856 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:08.114 true 00:06:08.114 12:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:08.114 12:46:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.372 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.631 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:08.631 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:08.631 true 00:06:08.890 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:08.890 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.890 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.149 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:09.149 12:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:09.408 true 00:06:09.408 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:09.408 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.666 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.925 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:09.925 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:09.925 true 00:06:09.925 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:09.925 12:46:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.184 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.442 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:10.442 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:10.701 true 00:06:10.701 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:10.701 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.960 
12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.220 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:11.220 12:46:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:11.220 true 00:06:11.220 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:11.220 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.480 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.739 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:11.739 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:11.998 true 00:06:11.998 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:11.998 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.257 12:46:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.516 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:12.516 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:12.516 true 00:06:12.516 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:12.516 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.773 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.032 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:13.032 12:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:13.291 true 00:06:13.291 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:13.291 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.550 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.810 
12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:13.810 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:13.810 true 00:06:13.810 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:13.810 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.068 12:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.327 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:14.327 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:14.586 true 00:06:14.586 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:14.586 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.845 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.104 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:15.104 12:46:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:15.104 true 00:06:15.104 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:15.104 12:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.363 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.622 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:15.622 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:15.881 true 00:06:15.881 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:15.881 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.140 12:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.399 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:16.399 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:16.399 true 00:06:16.399 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:16.399 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.658 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.915 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:16.915 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:17.172 true 00:06:17.172 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:17.172 12:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.429 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.686 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:17.686 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:17.686 true 00:06:17.686 12:46:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:17.686 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.944 12:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.201 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:18.201 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:18.459 true 00:06:18.459 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:18.459 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.717 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.975 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:18.975 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:18.975 true 00:06:18.975 12:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:18.975 12:46:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.233 12:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.492 12:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:19.492 12:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:19.749 true 00:06:19.749 12:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:19.749 12:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.007 12:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.266 12:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:20.266 12:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:20.266 true 00:06:20.524 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:20.524 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.524 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.782 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:20.782 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:21.041 true 00:06:21.041 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:21.041 12:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.300 12:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.558 12:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:21.558 12:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:21.558 true 00:06:21.816 12:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:21.816 12:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.816 
12:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.074 12:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:22.074 12:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:22.333 true 00:06:22.333 12:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:22.333 12:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.591 12:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.850 12:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:22.850 12:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:23.109 true 00:06:23.109 12:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:23.109 12:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.109 12:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.368 12:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:23.368 12:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:23.626 true 00:06:23.626 12:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:23.626 12:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.885 12:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.143 12:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:24.143 12:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:24.143 true 00:06:24.402 12:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:24.402 12:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.402 12:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.660 
12:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:24.660 12:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:24.919 true 00:06:24.919 12:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:24.919 12:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.177 12:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.436 12:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:25.436 12:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:25.695 true 00:06:25.695 12:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:25.695 12:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.695 12:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.954 12:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:25.954 12:46:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:26.212 true 00:06:26.212 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:26.212 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.471 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.729 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:26.729 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:26.987 true 00:06:26.987 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:26.987 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.987 12:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.248 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:27.248 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:27.505 true 00:06:27.505 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:27.505 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.762 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.020 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:28.020 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:28.020 true 00:06:28.278 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:28.278 12:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.278 12:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.536 12:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:28.536 12:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:28.794 true 00:06:28.794 12:46:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:28.794 12:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.052 12:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.313 12:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:29.313 12:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:29.313 true 00:06:29.571 12:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:29.571 12:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.571 12:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.829 12:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:29.829 12:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:30.088 true 00:06:30.088 12:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:30.088 12:46:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.347 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.606 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:30.606 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:30.606 true 00:06:30.864 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:30.864 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.864 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.122 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:31.122 12:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:31.380 true 00:06:31.380 12:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:31.380 12:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.639 12:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.898 12:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:31.898 12:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:31.898 true 00:06:32.157 12:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:32.157 12:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.157 12:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.414 12:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:32.414 12:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:32.672 true 00:06:32.672 12:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:32.672 12:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.930 
12:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.189 12:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:33.189 12:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:33.189 true 00:06:33.447 12:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:33.447 12:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.447 12:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.704 12:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:33.704 12:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:33.963 true 00:06:33.963 12:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:33.963 12:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.221 12:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.479 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:34.479 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:34.479 true 00:06:34.737 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:34.737 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.737 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.996 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:34.996 12:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:35.254 true 00:06:35.254 12:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187 00:06:35.254 12:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.512 12:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.771 
12:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:06:35.771 12:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:06:35.771 true
00:06:35.771 Initializing NVMe Controllers
00:06:35.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:35.771 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:06:35.771 Controller IO queue size 128, less than required.
00:06:35.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:35.771 WARNING: Some requested NVMe devices were skipped
00:06:35.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:35.771 Initialization complete. Launching workers.
00:06:35.771 ========================================================
00:06:35.771 Latency(us)
00:06:35.771 Device Information : IOPS MiB/s Average min max
00:06:35.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27489.73 13.42 4656.13 2117.35 8451.39
00:06:35.771 ========================================================
00:06:35.771 Total : 27489.73 13.42 4656.13 2117.35 8451.39
00:06:35.771
00:06:36.030 12:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795187
00:06:36.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (795187) - No such process
00:06:36.030 12:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 795187
00:06:36.030 12:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
nqn.2016-06.io.spdk:cnode1 1 00:06:36.030 12:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:36.288 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:36.289 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:36.289 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:36.289 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.289 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:36.547 null0 00:06:36.547 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.547 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.547 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:36.547 null1 00:06:36.806 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.806 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.806 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:36.806 null2 00:06:36.806 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.806 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.806 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:37.064 null3 00:06:37.064 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.064 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.064 12:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:37.323 null4 00:06:37.323 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.323 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.323 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:37.581 null5 00:06:37.581 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.581 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.581 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:37.581 null6 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.840 12:46:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:37.840 null7 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
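The repeated resize records earlier in this trace (the @44-@50 markers) follow a loop that swaps namespace 1 out and back in, then grows NULL1 by one block, for as long as the I/O workload PID (795187 above) stays alive. A minimal sketch of that control flow follows; it is a reconstruction from the trace, not the script itself: `rpc()` is an echo stub standing in for scripts/rpc.py, and `perf_alive()` is a hypothetical stand-in for the `kill -0` liveness check, capped at 5 passes so the sketch terminates on its own.

```shell
#!/usr/bin/env bash
# Sketch of the resize/hotplug loop seen in the trace (ns_hotplug_stress.sh
# markers @44-@50). rpc() is a stub that only echoes, so the control flow
# runs without a live SPDK nvmf target; the real script calls scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1024
iterations=0

# Hypothetical stand-in for "kill -0 $PERF_PID": pretend the I/O workload
# stays alive for exactly 5 passes.
perf_alive() { (( iterations < 5 )); }

while perf_alive; do
    rpc nvmf_subsystem_remove_ns "$NQN" 1      # hot-unplug namespace 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0    # hot-plug it back
    null_size=$((null_size + 1))               # grow NULL1 by one block
    rpc bdev_null_resize NULL1 "$null_size"
    iterations=$((iterations + 1))
done
echo "final null_size=$null_size"
```

Each pass therefore emits exactly the remove_ns/add_ns/resize triple that repeats through the log, with null_size ticking up by one per pass.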
00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:37.840 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:37.841 
12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
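The interleaved @58-@66 records around this point trace the parallel phase: eight null bdevs (null0-null7, 100 MiB with 4096-byte blocks) are created, then eight background `add_remove` workers each add and remove their own namespace ten times, with the parent collecting PIDs via `pids+=($!)` and waiting on them. A sketch of that structure, again with `rpc()` stubbed by `echo` so it runs standalone (the temp-dir logging is an addition for illustration, not part of the traced script):

```shell
#!/usr/bin/env bash
# Sketch of the parallel hotplug phase traced here (ns_hotplug_stress.sh
# markers @14-@18 and @58-@66). rpc() is an echo stub; the real script
# drives scripts/rpc.py against a running nvmf target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
nthreads=8
pids=()
tmpdir=$(mktemp -d)

add_remove() {                  # $1 = nsid, $2 = backing bdev
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

# Create the eight null bdevs: 100 MiB each, 4096-byte block size.
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done

# Launch one background worker per namespace and collect its PID,
# mirroring the trace's 'pids+=($!)' and final 'wait <pids>'.
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" > "$tmpdir/add_remove.$i.log" &
    pids+=($!)
done
wait "${pids[@]}"
echo "workers finished: ${#pids[@]}"
```

The `wait 800699 800700 ...` record in the log is exactly this final `wait` on the collected worker PIDs; the per-worker logs here would each hold the 20 add/remove RPCs (10 iterations x 2 calls).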
00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 800699 800700 800702 800704 800706 800708 800710 800712 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.841 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.100 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.100 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.100 
12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.100 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.100 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.100 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.100 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.100 12:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.359 12:46:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.359 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.617 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.618 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.877 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.136 12:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.395 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.395 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.395 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.395 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.395 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.395 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.395 12:46:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.395 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.654 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.913 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.913 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.913 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.913 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.914 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.173 12:46:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.173 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.173 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.173 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.173 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.173 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.173 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.173 12:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.432 12:46:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.432 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.691 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.691 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.691 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.691 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.691 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.691 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.691 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.691 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.951 12:46:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.951 12:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.210 
12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.210 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.469 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.469 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.469 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.469 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.469 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.469 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.469 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.469 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.729 12:46:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.729 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.988 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.988 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.988 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.988 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.988 12:46:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.988 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.988 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.988 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.988 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:42.247 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:42.248 rmmod nvme_tcp 00:06:42.248 rmmod nvme_fabrics 00:06:42.248 rmmod nvme_keyring 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:42.248 12:46:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 794702 ']' 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 794702 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 794702 ']' 00:06:42.248 12:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 794702 00:06:42.248 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:42.248 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.248 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794702 00:06:42.248 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:42.248 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:42.248 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794702' 00:06:42.248 killing process with pid 794702 00:06:42.248 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 794702 00:06:42.248 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 794702 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 
00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.507 12:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.413 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:44.413 00:06:44.413 real 0m47.198s 00:06:44.413 user 3m21.167s 00:06:44.413 sys 0m17.282s 00:06:44.413 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.413 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:44.413 ************************************ 00:06:44.413 END TEST nvmf_ns_hotplug_stress 00:06:44.413 ************************************ 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.673 ************************************ 00:06:44.673 START TEST nvmf_delete_subsystem 00:06:44.673 ************************************ 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:44.673 * Looking for test storage... 00:06:44.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.673 12:46:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@366 -- # ver2[v]=2 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.673 --rc genhtml_branch_coverage=1 00:06:44.673 --rc genhtml_function_coverage=1 00:06:44.673 --rc genhtml_legend=1 00:06:44.673 --rc geninfo_all_blocks=1 00:06:44.673 --rc geninfo_unexecuted_blocks=1 00:06:44.673 00:06:44.673 ' 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.673 --rc genhtml_branch_coverage=1 00:06:44.673 --rc genhtml_function_coverage=1 00:06:44.673 --rc genhtml_legend=1 00:06:44.673 --rc geninfo_all_blocks=1 00:06:44.673 --rc geninfo_unexecuted_blocks=1 00:06:44.673 00:06:44.673 ' 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.673 --rc genhtml_branch_coverage=1 00:06:44.673 --rc genhtml_function_coverage=1 00:06:44.673 --rc genhtml_legend=1 00:06:44.673 --rc geninfo_all_blocks=1 00:06:44.673 --rc geninfo_unexecuted_blocks=1 00:06:44.673 00:06:44.673 ' 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:44.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.673 --rc genhtml_branch_coverage=1 00:06:44.673 --rc genhtml_function_coverage=1 00:06:44.673 --rc genhtml_legend=1 00:06:44.673 --rc geninfo_all_blocks=1 00:06:44.673 --rc geninfo_unexecuted_blocks=1 00:06:44.673 00:06:44.673 ' 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:44.673 12:46:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.673 12:46:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.673 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.674 12:46:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:44.674 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.933 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.933 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.933 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:44.933 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:44.933 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.933 12:46:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.503 12:46:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.503 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.504 12:46:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:51.504 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:51.504 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:51.504 Found net devices under 0000:af:00.0: cvl_0_0 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:51.504 Found net devices under 0000:af:00.1: cvl_0_1 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:06:51.504 00:06:51.504 --- 10.0.0.2 ping statistics --- 00:06:51.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.504 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:06:51.504 00:06:51.504 --- 10.0.0.1 ping statistics --- 00:06:51.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.504 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:51.504 12:46:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=805015 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 805015 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 805015 ']' 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.504 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
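Editor's note: the `waitforlisten 805015` call above polls until the freshly launched `nvmf_tgt` (pid 805015) is up and its RPC socket at /var/tmp/spdk.sock is usable. A simplified stand-in for that wait loop is sketched below; the function name `waitfor`, the retry count, and the poll interval are illustrative, not SPDK's actual helper, which additionally verifies the process is alive and the socket answers RPCs.

```shell
# simplified stand-in for waitforlisten: poll until a path appears, then succeed.
waitfor() {
  local path=$1 retries=${2:-50}
  while [ "$retries" -gt 0 ]; do
    [ -e "$path" ] && return 0       # target created its socket; done waiting
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1                           # timed out: target never came up
}

touch /tmp/spdk_demo.sock            # stand-in for nvmf_tgt creating its socket
waitfor /tmp/spdk_demo.sock && echo "listening"
```

In the real harness the timeout path is what produces the "Waiting for process to start up and listen on UNIX domain socket ..." message seen in the log.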
00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.505 [2024-12-15 12:46:58.697914] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:51.505 [2024-12-15 12:46:58.697959] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.505 [2024-12-15 12:46:58.780001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.505 [2024-12-15 12:46:58.801822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.505 [2024-12-15 12:46:58.801863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.505 [2024-12-15 12:46:58.801870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.505 [2024-12-15 12:46:58.801876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.505 [2024-12-15 12:46:58.801882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
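Editor's note: the two "Reactor started on core 0/1" notices above follow directly from the `-m 0x3` core mask passed to `nvmf_tgt`: the mask is a bitmap of CPU cores, and 0x3 sets bits 0 and 1. A tiny illustrative decoder (`mask_to_cores` is not an SPDK helper):

```shell
# decode an SPDK-style core mask (e.g. -m 0x3) into a space-separated core list
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=
  while [ "$mask" -ne 0 ]; do
    [ $(( mask & 1 )) -eq 1 ] && out="$out${out:+ }$core"   # bit set: core in use
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  printf '%s\n' "$out"
}

mask_to_cores 0x3   # → 0 1
```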
00:06:51.505 [2024-12-15 12:46:58.802970] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.505 [2024-12-15 12:46:58.802972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.505 [2024-12-15 12:46:58.938798] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.505 [2024-12-15 12:46:58.958980] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.505 NULL1 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.505 Delay0 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.505 12:46:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=805169 00:06:51.505 12:46:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:51.505 12:46:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:51.505 [2024-12-15 12:46:59.079929] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
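Editor's note: the `rpc_cmd` calls traced above (target/delete_subsystem.sh @15 through @28) build the whole fixture this test tears down: a TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev, the namespace, and then the perf workload. `rpc_cmd` ultimately drives SPDK's rpc.py; the dry-run recap below only echoes the commands (the `run` wrapper is illustrative, arguments are taken verbatim from the log):

```shell
run() { echo "+ rpc.py $*"; }   # dry-run; swap the body for scripts/rpc.py "$@" on a real target

run nvmf_create_transport -t tcp -o -u 8192
run nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
run nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
run bdev_null_create NULL1 1000 512
run bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev (1 s latency on every op) is what keeps I/O in flight long enough for the subsequent `nvmf_delete_subsystem` to abort it.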
00:06:53.412 12:47:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:53.412 12:47:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.413 12:47:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error 
(sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 [2024-12-15 12:47:01.234792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfc5e0 is same with the state(6) to be set 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read 
completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error 
(sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 [2024-12-15 12:47:01.236285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfc400 is same with the state(6) to be set 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 
00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O 
failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 Read completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 Write completed with error (sct=0, sc=8) 00:06:53.413 starting I/O failed: -6 00:06:53.414 Write completed with error (sct=0, sc=8) 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 starting I/O failed: -6 00:06:53.414 Write completed with error (sct=0, sc=8) 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 starting I/O failed: -6 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 starting I/O failed: -6 00:06:53.414 Write completed with error (sct=0, sc=8) 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 
starting I/O failed: -6 00:06:53.414 Write completed with error (sct=0, sc=8) 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 starting I/O failed: -6 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 starting I/O failed: -6 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 Write completed with error (sct=0, sc=8) 00:06:53.414 starting I/O failed: -6 00:06:53.414 Write completed with error (sct=0, sc=8) 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 starting I/O failed: -6 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 starting I/O failed: -6 00:06:53.414 Read completed with error (sct=0, sc=8) 00:06:53.414 starting I/O failed: -6 00:06:53.414 starting I/O failed: -6 00:06:54.354 [2024-12-15 12:47:02.215882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfa190 is same with the state(6) to be set 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 
00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 [2024-12-15 12:47:02.238478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfbf70 is same with the state(6) to be set 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 [2024-12-15 12:47:02.238604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfc7c0 is same with the state(6) to be set 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Read completed with error (sct=0, sc=8) 00:06:54.354 Write completed with error (sct=0, sc=8) 
00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Write completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Write completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Write completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Write completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 [2024-12-15 12:47:02.241732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f356800d060 is same with the state(6) to be set 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Write 
completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Write completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Write completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Write completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Write completed with error (sct=0, sc=8) 00:06:54.355 Write completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 Read completed with error (sct=0, sc=8) 00:06:54.355 [2024-12-15 12:47:02.242426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f356800d6c0 is same with the state(6) to be set 00:06:54.355 Initializing NVMe Controllers 00:06:54.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:54.355 Controller IO queue size 128, less than 
required. 00:06:54.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:54.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:54.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:54.355 Initialization complete. Launching workers. 00:06:54.355 ======================================================== 00:06:54.355 Latency(us) 00:06:54.355 Device Information : IOPS MiB/s Average min max 00:06:54.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.33 0.08 903618.46 786.80 1006381.93 00:06:54.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.78 0.09 914642.79 306.51 1009613.78 00:06:54.355 ======================================================== 00:06:54.355 Total : 344.11 0.17 909346.01 306.51 1009613.78 00:06:54.355 00:06:54.355 [2024-12-15 12:47:02.243030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfa190 (9): Bad file descriptor 00:06:54.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:54.355 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.355 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:54.355 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805169 00:06:54.355 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805169 00:06:55.063 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (805169) - No such process 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 805169 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 805169 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 805169 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.063 12:47:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.063 [2024-12-15 12:47:02.771584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=805717 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:55.063 12:47:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805717 00:06:55.063 12:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.063 [2024-12-15 12:47:02.861565] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:55.672 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.672 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805717 00:06:55.672 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.930 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.930 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805717 00:06:55.930 12:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.497 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:56.497 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805717 00:06:56.497 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.063 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.063 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805717 00:06:57.063 12:47:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 
0.5 00:06:57.632 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.632 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805717 00:06:57.632 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.199 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.199 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805717 00:06:58.199 12:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.199 Initializing NVMe Controllers 00:06:58.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:58.199 Controller IO queue size 128, less than required. 00:06:58.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:58.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:58.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:58.199 Initialization complete. Launching workers. 
00:06:58.199 ======================================================== 00:06:58.199 Latency(us) 00:06:58.199 Device Information : IOPS MiB/s Average min max 00:06:58.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001980.82 1000122.95 1005833.49 00:06:58.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003565.20 1000135.31 1009312.93 00:06:58.199 ======================================================== 00:06:58.199 Total : 256.00 0.12 1002773.01 1000122.95 1009312.93 00:06:58.199 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 805717 00:06:58.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (805717) - No such process 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 805717 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:58.458 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:06:58.458 rmmod nvme_tcp 00:06:58.458 rmmod nvme_fabrics 00:06:58.717 rmmod nvme_keyring 00:06:58.717 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:58.717 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 805015 ']' 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 805015 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 805015 ']' 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 805015 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 805015 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 805015' 00:06:58.718 killing process with pid 805015 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 805015 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 805015 
00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.718 12:47:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:01.255 00:07:01.255 real 0m16.305s 00:07:01.255 user 0m29.330s 00:07:01.255 sys 0m5.507s 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.255 ************************************ 00:07:01.255 END TEST 
nvmf_delete_subsystem 00:07:01.255 ************************************ 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.255 ************************************ 00:07:01.255 START TEST nvmf_host_management 00:07:01.255 ************************************ 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:01.255 * Looking for test storage... 00:07:01.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.255 12:47:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.255 --rc genhtml_branch_coverage=1 00:07:01.255 --rc genhtml_function_coverage=1 00:07:01.255 --rc genhtml_legend=1 00:07:01.255 --rc 
geninfo_all_blocks=1 00:07:01.255 --rc geninfo_unexecuted_blocks=1 00:07:01.255 00:07:01.255 ' 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.255 --rc genhtml_branch_coverage=1 00:07:01.255 --rc genhtml_function_coverage=1 00:07:01.255 --rc genhtml_legend=1 00:07:01.255 --rc geninfo_all_blocks=1 00:07:01.255 --rc geninfo_unexecuted_blocks=1 00:07:01.255 00:07:01.255 ' 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.255 --rc genhtml_branch_coverage=1 00:07:01.255 --rc genhtml_function_coverage=1 00:07:01.255 --rc genhtml_legend=1 00:07:01.255 --rc geninfo_all_blocks=1 00:07:01.255 --rc geninfo_unexecuted_blocks=1 00:07:01.255 00:07:01.255 ' 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.255 --rc genhtml_branch_coverage=1 00:07:01.255 --rc genhtml_function_coverage=1 00:07:01.255 --rc genhtml_legend=1 00:07:01.255 --rc geninfo_all_blocks=1 00:07:01.255 --rc geninfo_unexecuted_blocks=1 00:07:01.255 00:07:01.255 ' 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.255 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.255 
12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.256 12:47:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:07.825 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:07.825 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.825 12:47:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.825 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:07.826 Found net devices under 0000:af:00.0: cvl_0_0 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:07.826 Found net devices under 0000:af:00.1: cvl_0_1 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:07.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:07:07.826 00:07:07.826 --- 10.0.0.2 ping statistics --- 00:07:07.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.826 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:07.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:07:07.826 00:07:07.826 --- 10.0.0.1 ping statistics --- 00:07:07.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.826 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.826 12:47:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=809873 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 809873 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 809873 ']' 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.826 12:47:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.826 [2024-12-15 12:47:14.976228] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:07.826 [2024-12-15 12:47:14.976276] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.826 [2024-12-15 12:47:15.052133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.826 [2024-12-15 12:47:15.075291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.826 [2024-12-15 12:47:15.075327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.826 [2024-12-15 12:47:15.075335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.826 [2024-12-15 12:47:15.075343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.826 [2024-12-15 12:47:15.075348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:07.826 [2024-12-15 12:47:15.076683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.826 [2024-12-15 12:47:15.076789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.826 [2024-12-15 12:47:15.076900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.826 [2024-12-15 12:47:15.076900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.826 [2024-12-15 12:47:15.216316] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:07.826 12:47:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.826 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.827 Malloc0 00:07:07.827 [2024-12-15 12:47:15.295889] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=809987 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 809987 /var/tmp/bdevperf.sock 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 809987 ']' 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:07.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:07.827 { 00:07:07.827 "params": { 00:07:07.827 "name": "Nvme$subsystem", 00:07:07.827 "trtype": "$TEST_TRANSPORT", 00:07:07.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:07.827 "adrfam": "ipv4", 00:07:07.827 "trsvcid": "$NVMF_PORT", 00:07:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:07.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:07.827 "hdgst": ${hdgst:-false}, 
00:07:07.827 "ddgst": ${ddgst:-false} 00:07:07.827 }, 00:07:07.827 "method": "bdev_nvme_attach_controller" 00:07:07.827 } 00:07:07.827 EOF 00:07:07.827 )") 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:07.827 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:07.827 "params": { 00:07:07.827 "name": "Nvme0", 00:07:07.827 "trtype": "tcp", 00:07:07.827 "traddr": "10.0.0.2", 00:07:07.827 "adrfam": "ipv4", 00:07:07.827 "trsvcid": "4420", 00:07:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:07.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:07.827 "hdgst": false, 00:07:07.827 "ddgst": false 00:07:07.827 }, 00:07:07.827 "method": "bdev_nvme_attach_controller" 00:07:07.827 }' 00:07:07.827 [2024-12-15 12:47:15.391629] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:07.827 [2024-12-15 12:47:15.391677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809987 ] 00:07:07.827 [2024-12-15 12:47:15.468425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.827 [2024-12-15 12:47:15.490755] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.827 Running I/O for 10 seconds... 
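As context for the output above: the JSON fragment that `gen_nvmf_target_json` prints is fed to bdevperf over `/dev/fd/63`. A minimal standalone sketch that writes an equivalent controller entry to a file is below; the file path is an assumption for illustration, while the address, port, and NQNs are taken directly from the log.

```shell
#!/usr/bin/env bash
# Sketch only: reproduce the per-controller JSON entry that
# gen_nvmf_target_json emitted above. /tmp/bdevperf_nvme0.json is an
# illustrative path, not one used by the test suite.
cat <<'EOF' > /tmp/bdevperf_nvme0.json
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
```

In the test itself this entry is piped to `bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10`, as the xtrace above shows.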
00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=90 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 90 -ge 100 ']' 00:07:08.086 12:47:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=705 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 705 -ge 100 ']' 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.348 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.348 [2024-12-15 12:47:16.090660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 
[2024-12-15 12:47:16.090928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.090987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.090995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091095] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:08.348 [2024-12-15 12:47:16.091270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 
12:47:16.091351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091432] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.348 [2024-12-15 12:47:16.091591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.348 [2024-12-15 12:47:16.091599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.349 [2024-12-15 
12:47:16.091605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.349 [2024-12-15 12:47:16.091613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.349 [2024-12-15 12:47:16.091620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.349 [2024-12-15 12:47:16.091627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.349 [2024-12-15 12:47:16.091634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.349 [2024-12-15 12:47:16.091642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.349 [2024-12-15 12:47:16.091649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.349 [2024-12-15 12:47:16.091659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.349 [2024-12-15 12:47:16.091666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.349 [2024-12-15 12:47:16.092619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:08.349 task offset: 100864 on job bdev=Nvme0n1 fails 00:07:08.349 00:07:08.349 Latency(us) 00:07:08.349 [2024-12-15T11:47:16.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.349 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 
65536) 00:07:08.349 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:08.349 Verification LBA range: start 0x0 length 0x400 00:07:08.349 Nvme0n1 : 0.40 1928.93 120.56 160.74 0.00 29809.79 1482.36 26838.55 00:07:08.349 [2024-12-15T11:47:16.256Z] =================================================================================================================== 00:07:08.349 [2024-12-15T11:47:16.256Z] Total : 1928.93 120.56 160.74 0.00 29809.79 1482.36 26838.55 00:07:08.349 [2024-12-15 12:47:16.095040] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.349 [2024-12-15 12:47:16.095063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad490 (9): Bad file descriptor 00:07:08.349 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.349 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:08.349 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.349 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.349 [2024-12-15 12:47:16.098284] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:08.349 [2024-12-15 12:47:16.098413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:08.349 [2024-12-15 12:47:16.098435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.349 [2024-12-15 12:47:16.098450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode0 00:07:08.349 [2024-12-15 12:47:16.098457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:08.349 [2024-12-15 12:47:16.098464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:08.349 [2024-12-15 12:47:16.098471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdad490 00:07:08.349 [2024-12-15 12:47:16.098490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad490 (9): Bad file descriptor 00:07:08.349 [2024-12-15 12:47:16.098501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:08.349 [2024-12-15 12:47:16.098508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:08.349 [2024-12-15 12:47:16.098517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:08.349 [2024-12-15 12:47:16.098525] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:07:08.349 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.349 12:47:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 809987 00:07:09.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (809987) - No such process 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:09.283 { 00:07:09.283 "params": { 00:07:09.283 "name": "Nvme$subsystem", 00:07:09.283 "trtype": "$TEST_TRANSPORT", 00:07:09.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:09.283 "adrfam": "ipv4", 00:07:09.283 "trsvcid": "$NVMF_PORT", 00:07:09.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:09.283 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:09.283 "hdgst": ${hdgst:-false}, 00:07:09.283 "ddgst": ${ddgst:-false} 00:07:09.283 }, 00:07:09.283 "method": "bdev_nvme_attach_controller" 00:07:09.283 } 00:07:09.283 EOF 00:07:09.283 )") 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:09.283 12:47:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:09.283 "params": { 00:07:09.283 "name": "Nvme0", 00:07:09.283 "trtype": "tcp", 00:07:09.283 "traddr": "10.0.0.2", 00:07:09.283 "adrfam": "ipv4", 00:07:09.283 "trsvcid": "4420", 00:07:09.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.283 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:09.283 "hdgst": false, 00:07:09.283 "ddgst": false 00:07:09.283 }, 00:07:09.283 "method": "bdev_nvme_attach_controller" 00:07:09.283 }' 00:07:09.283 [2024-12-15 12:47:17.163575] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:09.283 [2024-12-15 12:47:17.163624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810373 ] 00:07:09.541 [2024-12-15 12:47:17.237345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.541 [2024-12-15 12:47:17.258258] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.541 Running I/O for 1 seconds... 
00:07:10.918 2048.00 IOPS, 128.00 MiB/s 00:07:10.918 Latency(us) 00:07:10.918 [2024-12-15T11:47:18.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.918 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:10.918 Verification LBA range: start 0x0 length 0x400 00:07:10.918 Nvme0n1 : 1.02 2068.56 129.29 0.00 0.00 30459.06 4556.31 26838.55 00:07:10.918 [2024-12-15T11:47:18.825Z] =================================================================================================================== 00:07:10.918 [2024-12-15T11:47:18.825Z] Total : 2068.56 129.29 0.00 0.00 30459.06 4556.31 26838.55 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:10.918 12:47:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:10.918 rmmod nvme_tcp 00:07:10.918 rmmod nvme_fabrics 00:07:10.918 rmmod nvme_keyring 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 809873 ']' 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 809873 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 809873 ']' 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 809873 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 809873 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 809873' 00:07:10.918 killing process with pid 809873 00:07:10.918 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 809873 00:07:10.918 12:47:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 809873 00:07:11.178 [2024-12-15 12:47:18.891516] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.178 12:47:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.085 12:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:13.085 12:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:13.085 00:07:13.085 real 0m12.240s 00:07:13.085 user 0m19.139s 
00:07:13.085 sys 0m5.525s 00:07:13.085 12:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.085 12:47:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.085 ************************************ 00:07:13.085 END TEST nvmf_host_management 00:07:13.085 ************************************ 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.344 ************************************ 00:07:13.344 START TEST nvmf_lvol 00:07:13.344 ************************************ 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.344 * Looking for test storage... 
00:07:13.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.344 12:47:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:13.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.344 --rc genhtml_branch_coverage=1 00:07:13.344 --rc genhtml_function_coverage=1 00:07:13.344 --rc genhtml_legend=1 00:07:13.344 --rc geninfo_all_blocks=1 00:07:13.344 --rc geninfo_unexecuted_blocks=1 
00:07:13.344 00:07:13.344 ' 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:13.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.344 --rc genhtml_branch_coverage=1 00:07:13.344 --rc genhtml_function_coverage=1 00:07:13.344 --rc genhtml_legend=1 00:07:13.344 --rc geninfo_all_blocks=1 00:07:13.344 --rc geninfo_unexecuted_blocks=1 00:07:13.344 00:07:13.344 ' 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:13.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.344 --rc genhtml_branch_coverage=1 00:07:13.344 --rc genhtml_function_coverage=1 00:07:13.344 --rc genhtml_legend=1 00:07:13.344 --rc geninfo_all_blocks=1 00:07:13.344 --rc geninfo_unexecuted_blocks=1 00:07:13.344 00:07:13.344 ' 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:13.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.344 --rc genhtml_branch_coverage=1 00:07:13.344 --rc genhtml_function_coverage=1 00:07:13.344 --rc genhtml_legend=1 00:07:13.344 --rc geninfo_all_blocks=1 00:07:13.344 --rc geninfo_unexecuted_blocks=1 00:07:13.344 00:07:13.344 ' 00:07:13.344 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.345 12:47:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.345 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.604 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.605 12:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:20.179 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:20.179 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.179 
12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:20.179 Found net devices under 0000:af:00.0: cvl_0_0 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:20.179 12:47:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:20.179 Found net devices under 0000:af:00.1: cvl_0_1 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:20.179 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.180 12:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:20.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:07:20.180 00:07:20.180 --- 10.0.0.2 ping statistics --- 00:07:20.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.180 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:07:20.180 00:07:20.180 --- 10.0.0.1 ping statistics --- 00:07:20.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.180 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=814087 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 814087 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 814087 ']' 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.180 [2024-12-15 12:47:27.332901] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:20.180 [2024-12-15 12:47:27.332953] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.180 [2024-12-15 12:47:27.411686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.180 [2024-12-15 12:47:27.434382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.180 [2024-12-15 12:47:27.434423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.180 [2024-12-15 12:47:27.434431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.180 [2024-12-15 12:47:27.434438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.180 [2024-12-15 12:47:27.434443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:20.180 [2024-12-15 12:47:27.435749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.180 [2024-12-15 12:47:27.435786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.180 [2024-12-15 12:47:27.435789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:20.180 [2024-12-15 12:47:27.743705] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.180 12:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:20.180 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:20.180 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:20.439 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:20.439 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:20.699 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:20.959 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=44c7779b-cd1c-44b0-9506-35bb4e58655c 00:07:20.959 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 44c7779b-cd1c-44b0-9506-35bb4e58655c lvol 20 00:07:20.959 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ea878845-cb04-4eee-b639-115d4512467e 00:07:20.959 12:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:21.217 12:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea878845-cb04-4eee-b639-115d4512467e 00:07:21.476 12:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:21.734 [2024-12-15 12:47:29.445629] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.734 12:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.993 12:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=814569 00:07:21.993 12:47:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:21.993 12:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:22.928 12:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ea878845-cb04-4eee-b639-115d4512467e MY_SNAPSHOT 00:07:23.187 12:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6f776daa-2b75-4d64-92f4-c1f41b12e950 00:07:23.187 12:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ea878845-cb04-4eee-b639-115d4512467e 30 00:07:23.444 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6f776daa-2b75-4d64-92f4-c1f41b12e950 MY_CLONE 00:07:23.702 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4ee16fe9-7395-489c-bc1c-317455de9d14 00:07:23.702 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4ee16fe9-7395-489c-bc1c-317455de9d14 00:07:24.269 12:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 814569 00:07:32.399 Initializing NVMe Controllers 00:07:32.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:32.399 Controller IO queue size 128, less than required. 00:07:32.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:32.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:32.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:32.399 Initialization complete. Launching workers. 00:07:32.399 ======================================================== 00:07:32.399 Latency(us) 00:07:32.399 Device Information : IOPS MiB/s Average min max 00:07:32.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12044.40 47.05 10629.22 414.46 59286.91 00:07:32.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11895.40 46.47 10764.84 2960.62 59588.91 00:07:32.399 ======================================================== 00:07:32.399 Total : 23939.80 93.51 10696.61 414.46 59588.91 00:07:32.399 00:07:32.399 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:32.399 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ea878845-cb04-4eee-b639-115d4512467e 00:07:32.657 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 44c7779b-cd1c-44b0-9506-35bb4e58655c 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:32.916 rmmod nvme_tcp 00:07:32.916 rmmod nvme_fabrics 00:07:32.916 rmmod nvme_keyring 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 814087 ']' 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 814087 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 814087 ']' 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 814087 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814087 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814087' 00:07:32.916 killing process with pid 814087 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 814087 00:07:32.916 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 814087 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.176 12:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:35.714 00:07:35.714 real 0m21.994s 00:07:35.714 user 1m3.414s 00:07:35.714 sys 0m7.517s 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.714 ************************************ 00:07:35.714 END TEST nvmf_lvol 00:07:35.714 
************************************ 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.714 ************************************ 00:07:35.714 START TEST nvmf_lvs_grow 00:07:35.714 ************************************ 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:35.714 * Looking for test storage... 00:07:35.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.714 --rc genhtml_branch_coverage=1 00:07:35.714 --rc genhtml_function_coverage=1 00:07:35.714 --rc genhtml_legend=1 00:07:35.714 --rc geninfo_all_blocks=1 00:07:35.714 --rc geninfo_unexecuted_blocks=1 00:07:35.714 00:07:35.714 ' 
00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.714 --rc genhtml_branch_coverage=1 00:07:35.714 --rc genhtml_function_coverage=1 00:07:35.714 --rc genhtml_legend=1 00:07:35.714 --rc geninfo_all_blocks=1 00:07:35.714 --rc geninfo_unexecuted_blocks=1 00:07:35.714 00:07:35.714 ' 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.714 --rc genhtml_branch_coverage=1 00:07:35.714 --rc genhtml_function_coverage=1 00:07:35.714 --rc genhtml_legend=1 00:07:35.714 --rc geninfo_all_blocks=1 00:07:35.714 --rc geninfo_unexecuted_blocks=1 00:07:35.714 00:07:35.714 ' 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.714 --rc genhtml_branch_coverage=1 00:07:35.714 --rc genhtml_function_coverage=1 00:07:35.714 --rc genhtml_legend=1 00:07:35.714 --rc geninfo_all_blocks=1 00:07:35.714 --rc geninfo_unexecuted_blocks=1 00:07:35.714 00:07:35.714 ' 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.714 12:47:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.714 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.715 
12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.715 12:47:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.715 
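The trace above records a real bash error ("[: : integer expression expected" from nvmf/common.sh line 33) when an empty variable reaches an integer comparison as `'[' '' -eq 1 ']'`. The usual guard is to default the expansion so the operand is always numeric; a sketch using a hypothetical `FLAG` variable, since the actual variable name is not visible in the trace:

```shell
# check_flag treats an unset or empty FLAG as 0, so the integer test
# '[ ... -eq 1 ]' never sees an empty operand and never errors.
check_flag() {
    if [ "${FLAG:-0}" -eq 1 ]; then   # '[ "" -eq 1 ]' would error, as in the log
        echo set
    else
        echo unset
    fi
}

unset FLAG
check_flag   # prints "unset" instead of raising an error
```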
12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.715 12:47:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.284 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.284 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:42.284 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:42.284 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:42.285 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:42.285 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.285 
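The device-discovery loop above globs each PCI device's `net/` directory and strips the path prefix with `${pci_net_devs[@]##*/}` to get interface names (cvl_0_0, cvl_0_1). The same pattern can be exercised against a mock sysfs tree; the directory layout below is fabricated to mirror what the log reports:

```shell
# Mock the sysfs layout the loop walks: two PCI functions, one netdev each.
sysfs="$(mktemp -d)"
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in "$sysfs"/*; do
    pci_net_devs=("$pci/net/"*)               # glob interfaces under this device
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[@]}"   # cvl_0_0 cvl_0_1
rm -rf "$sysfs"
```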
12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:42.285 Found net devices under 0000:af:00.0: cvl_0_0 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:42.285 Found net devices under 0000:af:00.1: cvl_0_1 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:42.285 12:47:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:42.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:07:42.285 00:07:42.285 --- 10.0.0.2 ping statistics --- 00:07:42.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.285 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:07:42.285 00:07:42.285 --- 10.0.0.1 ping statistics --- 00:07:42.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.285 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.285 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
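The network setup traced above moves the target NIC into its own namespace so initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) traffic crosses the wire, then opens port 4420 and ping-checks both directions. A dry-run sketch of that sequence, echoing instead of executing since the real commands need root and the physical interfaces:

```shell
# run echoes each command instead of executing it; replace the body
# with '"$@"' to run the sequence for real (as root, with the NICs present).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target NIC into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator
```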
nvmfappstart -m 0x1 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=819849 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 819849 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 819849 ']' 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.286 [2024-12-15 12:47:49.379381] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:42.286 [2024-12-15 12:47:49.379424] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.286 [2024-12-15 12:47:49.457207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.286 [2024-12-15 12:47:49.477988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.286 [2024-12-15 12:47:49.478024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.286 [2024-12-15 12:47:49.478031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.286 [2024-12-15 12:47:49.478037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.286 [2024-12-15 12:47:49.478042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:42.286 [2024-12-15 12:47:49.478540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:42.286 [2024-12-15 12:47:49.790171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.286 ************************************ 00:07:42.286 START TEST lvs_grow_clean 00:07:42.286 ************************************ 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.286 12:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.286 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:42.286 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:42.545 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:42.545 12:47:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:42.545 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:42.804 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:42.804 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:42.804 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 29b7afca-5d80-4423-9804-a4034b9cfa03 lvol 150 00:07:42.804 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7290aad8-1798-4bc9-9602-5c4e04a61dde 00:07:42.804 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.804 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:43.063 [2024-12-15 12:47:50.820525] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:43.063 [2024-12-15 12:47:50.820573] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:43.063 true 00:07:43.063 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
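The lvstore above reports 49 data clusters on a 200 MiB AIO bdev with a 4 MiB (4194304-byte) cluster size, and 99 after the file is truncated to 400 MiB and rescanned. That is consistent with one cluster's worth of space going to lvstore metadata, which is an assumption inferred from this run's numbers rather than stated in the log:

```shell
# Expected data-cluster count for an N-MiB bdev with 4 MiB clusters,
# treating metadata overhead as exactly one cluster (matches 49 and 99 here).
cluster_sz=$((4 * 1024 * 1024))
clusters() { echo $(( $1 * 1024 * 1024 / cluster_sz - 1 )); }

clusters 200   # 49, as reported before the grow
clusters 400   # 99, as reported after bdev_lvol_grow_lvstore
```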
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:43.063 12:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:43.322 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:43.322 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:43.322 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7290aad8-1798-4bc9-9602-5c4e04a61dde 00:07:43.581 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:43.840 [2024-12-15 12:47:51.558724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.840 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=820333 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:44.100 12:47:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 820333 /var/tmp/bdevperf.sock 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 820333 ']' 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:44.100 [2024-12-15 12:47:51.800521] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
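The waitforlisten calls traced above block until the RPC socket (/var/tmp/spdk.sock or /var/tmp/bdevperf.sock) appears, retrying up to max_retries=100. A minimal version of that polling loop; the helper name and the plain `-e` existence check are simplifications of what common/autotest_common.sh actually does:

```shell
# waitfor PATH [RETRIES]: poll until PATH exists, sleeping 0.1s between
# attempts; returns 0 on success, 1 if RETRIES attempts are exhausted.
waitfor() {
    local path=$1 retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -e "$path" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```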
00:07:44.100 [2024-12-15 12:47:51.800567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820333 ] 00:07:44.100 [2024-12-15 12:47:51.873733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.100 [2024-12-15 12:47:51.895348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:44.100 12:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:44.668 Nvme0n1 00:07:44.668 12:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:44.927 [ 00:07:44.927 { 00:07:44.927 "name": "Nvme0n1", 00:07:44.927 "aliases": [ 00:07:44.927 "7290aad8-1798-4bc9-9602-5c4e04a61dde" 00:07:44.927 ], 00:07:44.927 "product_name": "NVMe disk", 00:07:44.927 "block_size": 4096, 00:07:44.927 "num_blocks": 38912, 00:07:44.927 "uuid": "7290aad8-1798-4bc9-9602-5c4e04a61dde", 00:07:44.927 "numa_id": 1, 00:07:44.927 "assigned_rate_limits": { 00:07:44.927 "rw_ios_per_sec": 0, 00:07:44.927 "rw_mbytes_per_sec": 0, 00:07:44.927 "r_mbytes_per_sec": 0, 00:07:44.927 "w_mbytes_per_sec": 0 00:07:44.927 }, 00:07:44.927 "claimed": false, 00:07:44.927 "zoned": false, 00:07:44.927 "supported_io_types": { 00:07:44.927 "read": true, 
00:07:44.927 "write": true, 00:07:44.927 "unmap": true, 00:07:44.927 "flush": true, 00:07:44.927 "reset": true, 00:07:44.927 "nvme_admin": true, 00:07:44.927 "nvme_io": true, 00:07:44.927 "nvme_io_md": false, 00:07:44.927 "write_zeroes": true, 00:07:44.927 "zcopy": false, 00:07:44.927 "get_zone_info": false, 00:07:44.927 "zone_management": false, 00:07:44.927 "zone_append": false, 00:07:44.927 "compare": true, 00:07:44.927 "compare_and_write": true, 00:07:44.927 "abort": true, 00:07:44.927 "seek_hole": false, 00:07:44.927 "seek_data": false, 00:07:44.927 "copy": true, 00:07:44.927 "nvme_iov_md": false 00:07:44.927 }, 00:07:44.927 "memory_domains": [ 00:07:44.927 { 00:07:44.927 "dma_device_id": "system", 00:07:44.927 "dma_device_type": 1 00:07:44.927 } 00:07:44.927 ], 00:07:44.927 "driver_specific": { 00:07:44.927 "nvme": [ 00:07:44.927 { 00:07:44.927 "trid": { 00:07:44.927 "trtype": "TCP", 00:07:44.927 "adrfam": "IPv4", 00:07:44.927 "traddr": "10.0.0.2", 00:07:44.927 "trsvcid": "4420", 00:07:44.927 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:44.927 }, 00:07:44.927 "ctrlr_data": { 00:07:44.927 "cntlid": 1, 00:07:44.927 "vendor_id": "0x8086", 00:07:44.927 "model_number": "SPDK bdev Controller", 00:07:44.927 "serial_number": "SPDK0", 00:07:44.927 "firmware_revision": "25.01", 00:07:44.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.927 "oacs": { 00:07:44.927 "security": 0, 00:07:44.927 "format": 0, 00:07:44.927 "firmware": 0, 00:07:44.928 "ns_manage": 0 00:07:44.928 }, 00:07:44.928 "multi_ctrlr": true, 00:07:44.928 "ana_reporting": false 00:07:44.928 }, 00:07:44.928 "vs": { 00:07:44.928 "nvme_version": "1.3" 00:07:44.928 }, 00:07:44.928 "ns_data": { 00:07:44.928 "id": 1, 00:07:44.928 "can_share": true 00:07:44.928 } 00:07:44.928 } 00:07:44.928 ], 00:07:44.928 "mp_policy": "active_passive" 00:07:44.928 } 00:07:44.928 } 00:07:44.928 ] 00:07:44.928 12:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=820552 
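The bdev_get_bdevs dump above is JSON interleaved with trace timestamps. Fields like the uuid and block count can be pulled out of the clean JSON with plain shell tools; a sketch over a trimmed-down copy of this run's output (production scripts would use jq or python, and the sed patterns here assume the one-key-per-match layout shown):

```shell
# A reduced copy of the Nvme0n1 record from the log above.
json='[{"name": "Nvme0n1", "block_size": 4096, "num_blocks": 38912,
"uuid": "7290aad8-1798-4bc9-9602-5c4e04a61dde"}]'

uuid=$(printf '%s' "$json" | sed -n 's/.*"uuid": "\([^"]*\)".*/\1/p')
blocks=$(printf '%s' "$json" | sed -n 's/.*"num_blocks": \([0-9]*\).*/\1/p')
size_mib=$(( blocks * 4096 / 1024 / 1024 ))   # 38912 blocks * 4096 B = 152 MiB
echo "$uuid $size_mib MiB"
```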
00:07:44.928 12:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:44.928 12:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:44.928 Running I/O for 10 seconds... 00:07:45.865 Latency(us) 00:07:45.865 [2024-12-15T11:47:53.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.865 Nvme0n1 : 1.00 23717.00 92.64 0.00 0.00 0.00 0.00 0.00 00:07:45.865 [2024-12-15T11:47:53.772Z] =================================================================================================================== 00:07:45.865 [2024-12-15T11:47:53.772Z] Total : 23717.00 92.64 0.00 0.00 0.00 0.00 0.00 00:07:45.865 00:07:46.801 12:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:47.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.059 Nvme0n1 : 2.00 23860.00 93.20 0.00 0.00 0.00 0.00 0.00 00:07:47.059 [2024-12-15T11:47:54.966Z] =================================================================================================================== 00:07:47.059 [2024-12-15T11:47:54.967Z] Total : 23860.00 93.20 0.00 0.00 0.00 0.00 0.00 00:07:47.060 00:07:47.060 true 00:07:47.060 12:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:47.060 12:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:47.321 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:47.321 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:47.321 12:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 820552 00:07:47.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.889 Nvme0n1 : 3.00 23845.67 93.15 0.00 0.00 0.00 0.00 0.00 00:07:47.889 [2024-12-15T11:47:55.796Z] =================================================================================================================== 00:07:47.889 [2024-12-15T11:47:55.796Z] Total : 23845.67 93.15 0.00 0.00 0.00 0.00 0.00 00:07:47.889 00:07:49.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.267 Nvme0n1 : 4.00 23925.00 93.46 0.00 0.00 0.00 0.00 0.00 00:07:49.267 [2024-12-15T11:47:57.174Z] =================================================================================================================== 00:07:49.267 [2024-12-15T11:47:57.174Z] Total : 23925.00 93.46 0.00 0.00 0.00 0.00 0.00 00:07:49.267 00:07:50.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.205 Nvme0n1 : 5.00 23980.20 93.67 0.00 0.00 0.00 0.00 0.00 00:07:50.205 [2024-12-15T11:47:58.112Z] =================================================================================================================== 00:07:50.205 [2024-12-15T11:47:58.112Z] Total : 23980.20 93.67 0.00 0.00 0.00 0.00 0.00 00:07:50.205 00:07:51.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.143 Nvme0n1 : 6.00 24012.50 93.80 0.00 0.00 0.00 0.00 0.00 00:07:51.143 [2024-12-15T11:47:59.050Z] =================================================================================================================== 00:07:51.143 
[2024-12-15T11:47:59.050Z] Total : 24012.50 93.80 0.00 0.00 0.00 0.00 0.00 00:07:51.143 00:07:52.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.081 Nvme0n1 : 7.00 24048.43 93.94 0.00 0.00 0.00 0.00 0.00 00:07:52.081 [2024-12-15T11:47:59.988Z] =================================================================================================================== 00:07:52.081 [2024-12-15T11:47:59.988Z] Total : 24048.43 93.94 0.00 0.00 0.00 0.00 0.00 00:07:52.081 00:07:53.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.018 Nvme0n1 : 8.00 24019.75 93.83 0.00 0.00 0.00 0.00 0.00 00:07:53.018 [2024-12-15T11:48:00.925Z] =================================================================================================================== 00:07:53.018 [2024-12-15T11:48:00.925Z] Total : 24019.75 93.83 0.00 0.00 0.00 0.00 0.00 00:07:53.018 00:07:53.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.955 Nvme0n1 : 9.00 24039.89 93.91 0.00 0.00 0.00 0.00 0.00 00:07:53.955 [2024-12-15T11:48:01.862Z] =================================================================================================================== 00:07:53.955 [2024-12-15T11:48:01.862Z] Total : 24039.89 93.91 0.00 0.00 0.00 0.00 0.00 00:07:53.955 00:07:54.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.892 Nvme0n1 : 10.00 24058.60 93.98 0.00 0.00 0.00 0.00 0.00 00:07:54.892 [2024-12-15T11:48:02.799Z] =================================================================================================================== 00:07:54.892 [2024-12-15T11:48:02.799Z] Total : 24058.60 93.98 0.00 0.00 0.00 0.00 0.00 00:07:54.892 00:07:54.892 00:07:54.892 Latency(us) 00:07:54.892 [2024-12-15T11:48:02.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:54.892 Nvme0n1 : 10.00 24060.64 93.99 0.00 0.00 5316.71 1435.55 10548.18 00:07:54.892 [2024-12-15T11:48:02.799Z] =================================================================================================================== 00:07:54.892 [2024-12-15T11:48:02.800Z] Total : 24060.64 93.99 0.00 0.00 5316.71 1435.55 10548.18 00:07:54.893 { 00:07:54.893 "results": [ 00:07:54.893 { 00:07:54.893 "job": "Nvme0n1", 00:07:54.893 "core_mask": "0x2", 00:07:54.893 "workload": "randwrite", 00:07:54.893 "status": "finished", 00:07:54.893 "queue_depth": 128, 00:07:54.893 "io_size": 4096, 00:07:54.893 "runtime": 10.004471, 00:07:54.893 "iops": 24060.64248674418, 00:07:54.893 "mibps": 93.98688471384445, 00:07:54.893 "io_failed": 0, 00:07:54.893 "io_timeout": 0, 00:07:54.893 "avg_latency_us": 5316.7095355444535, 00:07:54.893 "min_latency_us": 1435.5504761904763, 00:07:54.893 "max_latency_us": 10548.175238095238 00:07:54.893 } 00:07:54.893 ], 00:07:54.893 "core_count": 1 00:07:54.893 } 00:07:54.893 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 820333 00:07:54.893 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 820333 ']' 00:07:54.893 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 820333 00:07:54.893 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:54.893 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.893 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 820333 00:07:55.152 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:55.152 12:48:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:55.152 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 820333' 00:07:55.152 killing process with pid 820333 00:07:55.152 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 820333 00:07:55.152 Received shutdown signal, test time was about 10.000000 seconds 00:07:55.152 00:07:55.152 Latency(us) 00:07:55.152 [2024-12-15T11:48:03.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.152 [2024-12-15T11:48:03.059Z] =================================================================================================================== 00:07:55.152 [2024-12-15T11:48:03.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:55.152 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 820333 00:07:55.152 12:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.411 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.670 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:55.670 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:55.929 [2024-12-15 12:48:03.781345] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:55.929 12:48:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:55.929 12:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:56.188 request: 00:07:56.188 { 00:07:56.188 "uuid": "29b7afca-5d80-4423-9804-a4034b9cfa03", 00:07:56.188 "method": "bdev_lvol_get_lvstores", 00:07:56.188 "req_id": 1 00:07:56.188 } 00:07:56.188 Got JSON-RPC error response 00:07:56.188 response: 00:07:56.188 { 00:07:56.188 "code": -19, 00:07:56.188 "message": "No such device" 00:07:56.188 } 00:07:56.188 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:56.188 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.188 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.188 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.188 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:56.447 aio_bdev 00:07:56.447 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7290aad8-1798-4bc9-9602-5c4e04a61dde 00:07:56.447 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7290aad8-1798-4bc9-9602-5c4e04a61dde 00:07:56.447 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.447 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:56.447 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.447 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:56.447 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:56.706 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7290aad8-1798-4bc9-9602-5c4e04a61dde -t 2000 00:07:56.706 [ 00:07:56.706 { 00:07:56.706 "name": "7290aad8-1798-4bc9-9602-5c4e04a61dde", 00:07:56.706 "aliases": [ 00:07:56.706 "lvs/lvol" 00:07:56.706 ], 00:07:56.706 "product_name": "Logical Volume", 00:07:56.706 "block_size": 4096, 00:07:56.706 "num_blocks": 38912, 00:07:56.706 "uuid": "7290aad8-1798-4bc9-9602-5c4e04a61dde", 00:07:56.706 "assigned_rate_limits": { 00:07:56.706 "rw_ios_per_sec": 0, 00:07:56.706 "rw_mbytes_per_sec": 0, 00:07:56.706 "r_mbytes_per_sec": 0, 00:07:56.707 "w_mbytes_per_sec": 0 00:07:56.707 }, 00:07:56.707 "claimed": false, 00:07:56.707 "zoned": false, 00:07:56.707 "supported_io_types": { 00:07:56.707 "read": true, 00:07:56.707 "write": true, 00:07:56.707 "unmap": true, 00:07:56.707 "flush": false, 00:07:56.707 "reset": true, 00:07:56.707 
"nvme_admin": false, 00:07:56.707 "nvme_io": false, 00:07:56.707 "nvme_io_md": false, 00:07:56.707 "write_zeroes": true, 00:07:56.707 "zcopy": false, 00:07:56.707 "get_zone_info": false, 00:07:56.707 "zone_management": false, 00:07:56.707 "zone_append": false, 00:07:56.707 "compare": false, 00:07:56.707 "compare_and_write": false, 00:07:56.707 "abort": false, 00:07:56.707 "seek_hole": true, 00:07:56.707 "seek_data": true, 00:07:56.707 "copy": false, 00:07:56.707 "nvme_iov_md": false 00:07:56.707 }, 00:07:56.707 "driver_specific": { 00:07:56.707 "lvol": { 00:07:56.707 "lvol_store_uuid": "29b7afca-5d80-4423-9804-a4034b9cfa03", 00:07:56.707 "base_bdev": "aio_bdev", 00:07:56.707 "thin_provision": false, 00:07:56.707 "num_allocated_clusters": 38, 00:07:56.707 "snapshot": false, 00:07:56.707 "clone": false, 00:07:56.707 "esnap_clone": false 00:07:56.707 } 00:07:56.707 } 00:07:56.707 } 00:07:56.707 ] 00:07:56.707 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:56.707 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:56.707 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:56.966 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:56.966 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:56.966 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:57.224 12:48:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:57.224 12:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7290aad8-1798-4bc9-9602-5c4e04a61dde 00:07:57.482 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 29b7afca-5d80-4423-9804-a4034b9cfa03 00:07:57.482 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.741 00:07:57.741 real 0m15.691s 00:07:57.741 user 0m15.238s 00:07:57.741 sys 0m1.509s 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:57.741 ************************************ 00:07:57.741 END TEST lvs_grow_clean 00:07:57.741 ************************************ 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.741 ************************************ 
00:07:57.741 START TEST lvs_grow_dirty 00:07:57.741 ************************************ 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.741 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:58.000 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:58.000 12:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:58.259 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=67951733-4d18-49e7-99a0-e6603b8c4903 00:07:58.259 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:07:58.259 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:58.517 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:58.517 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:58.517 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 67951733-4d18-49e7-99a0-e6603b8c4903 lvol 150 00:07:58.517 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0efc11f5-8436-49e8-b72d-b1b7416036b1 00:07:58.517 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:58.517 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:58.776 [2024-12-15 12:48:06.594714] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:58.776 [2024-12-15 12:48:06.594768] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:58.776 true 00:07:58.776 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:07:58.776 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:59.035 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:59.035 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:59.294 12:48:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0efc11f5-8436-49e8-b72d-b1b7416036b1 00:07:59.294 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:59.553 [2024-12-15 12:48:07.324893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.553 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=823576 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 823576 /var/tmp/bdevperf.sock 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 823576 ']' 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:59.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.812 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:59.812 [2024-12-15 12:48:07.568034] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:59.812 [2024-12-15 12:48:07.568090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823576 ] 00:07:59.812 [2024-12-15 12:48:07.640223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.812 [2024-12-15 12:48:07.662350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.071 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.071 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:00.071 12:48:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:00.331 Nvme0n1 00:08:00.331 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:00.331 [ 00:08:00.331 { 00:08:00.331 "name": "Nvme0n1", 00:08:00.331 "aliases": [ 00:08:00.331 "0efc11f5-8436-49e8-b72d-b1b7416036b1" 00:08:00.331 ], 00:08:00.331 "product_name": "NVMe disk", 00:08:00.331 "block_size": 4096, 00:08:00.331 "num_blocks": 38912, 00:08:00.331 "uuid": "0efc11f5-8436-49e8-b72d-b1b7416036b1", 00:08:00.331 "numa_id": 1, 00:08:00.331 "assigned_rate_limits": { 00:08:00.331 "rw_ios_per_sec": 0, 00:08:00.331 "rw_mbytes_per_sec": 0, 00:08:00.331 "r_mbytes_per_sec": 0, 00:08:00.331 "w_mbytes_per_sec": 0 00:08:00.331 }, 00:08:00.331 "claimed": false, 00:08:00.331 "zoned": false, 00:08:00.331 "supported_io_types": { 00:08:00.331 "read": true, 
00:08:00.331 "write": true, 00:08:00.331 "unmap": true, 00:08:00.331 "flush": true, 00:08:00.331 "reset": true, 00:08:00.331 "nvme_admin": true, 00:08:00.331 "nvme_io": true, 00:08:00.331 "nvme_io_md": false, 00:08:00.331 "write_zeroes": true, 00:08:00.331 "zcopy": false, 00:08:00.331 "get_zone_info": false, 00:08:00.331 "zone_management": false, 00:08:00.331 "zone_append": false, 00:08:00.331 "compare": true, 00:08:00.331 "compare_and_write": true, 00:08:00.331 "abort": true, 00:08:00.331 "seek_hole": false, 00:08:00.331 "seek_data": false, 00:08:00.331 "copy": true, 00:08:00.331 "nvme_iov_md": false 00:08:00.331 }, 00:08:00.331 "memory_domains": [ 00:08:00.331 { 00:08:00.331 "dma_device_id": "system", 00:08:00.331 "dma_device_type": 1 00:08:00.331 } 00:08:00.331 ], 00:08:00.331 "driver_specific": { 00:08:00.331 "nvme": [ 00:08:00.331 { 00:08:00.331 "trid": { 00:08:00.331 "trtype": "TCP", 00:08:00.331 "adrfam": "IPv4", 00:08:00.331 "traddr": "10.0.0.2", 00:08:00.331 "trsvcid": "4420", 00:08:00.331 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:00.331 }, 00:08:00.331 "ctrlr_data": { 00:08:00.331 "cntlid": 1, 00:08:00.331 "vendor_id": "0x8086", 00:08:00.331 "model_number": "SPDK bdev Controller", 00:08:00.331 "serial_number": "SPDK0", 00:08:00.331 "firmware_revision": "25.01", 00:08:00.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.331 "oacs": { 00:08:00.331 "security": 0, 00:08:00.331 "format": 0, 00:08:00.331 "firmware": 0, 00:08:00.331 "ns_manage": 0 00:08:00.331 }, 00:08:00.331 "multi_ctrlr": true, 00:08:00.331 "ana_reporting": false 00:08:00.331 }, 00:08:00.331 "vs": { 00:08:00.331 "nvme_version": "1.3" 00:08:00.331 }, 00:08:00.331 "ns_data": { 00:08:00.331 "id": 1, 00:08:00.331 "can_share": true 00:08:00.331 } 00:08:00.331 } 00:08:00.331 ], 00:08:00.331 "mp_policy": "active_passive" 00:08:00.331 } 00:08:00.331 } 00:08:00.331 ] 00:08:00.590 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=823609 
00:08:00.590 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:00.590 12:48:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:00.590 Running I/O for 10 seconds... 00:08:01.527 Latency(us) 00:08:01.527 [2024-12-15T11:48:09.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.527 Nvme0n1 : 1.00 23307.00 91.04 0.00 0.00 0.00 0.00 0.00 00:08:01.527 [2024-12-15T11:48:09.434Z] =================================================================================================================== 00:08:01.527 [2024-12-15T11:48:09.434Z] Total : 23307.00 91.04 0.00 0.00 0.00 0.00 0.00 00:08:01.527 00:08:02.465 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:02.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.465 Nvme0n1 : 2.00 23390.00 91.37 0.00 0.00 0.00 0.00 0.00 00:08:02.465 [2024-12-15T11:48:10.372Z] =================================================================================================================== 00:08:02.465 [2024-12-15T11:48:10.372Z] Total : 23390.00 91.37 0.00 0.00 0.00 0.00 0.00 00:08:02.465 00:08:02.724 true 00:08:02.724 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:02.724 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:02.983 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:02.983 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:02.983 12:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 823609 00:08:03.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.551 Nvme0n1 : 3.00 23389.67 91.37 0.00 0.00 0.00 0.00 0.00 00:08:03.551 [2024-12-15T11:48:11.458Z] =================================================================================================================== 00:08:03.551 [2024-12-15T11:48:11.458Z] Total : 23389.67 91.37 0.00 0.00 0.00 0.00 0.00 00:08:03.551 00:08:04.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.490 Nvme0n1 : 4.00 23527.50 91.90 0.00 0.00 0.00 0.00 0.00 00:08:04.490 [2024-12-15T11:48:12.397Z] =================================================================================================================== 00:08:04.490 [2024-12-15T11:48:12.397Z] Total : 23527.50 91.90 0.00 0.00 0.00 0.00 0.00 00:08:04.490 00:08:05.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.869 Nvme0n1 : 5.00 23597.80 92.18 0.00 0.00 0.00 0.00 0.00 00:08:05.869 [2024-12-15T11:48:13.776Z] =================================================================================================================== 00:08:05.869 [2024-12-15T11:48:13.776Z] Total : 23597.80 92.18 0.00 0.00 0.00 0.00 0.00 00:08:05.869 00:08:06.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.807 Nvme0n1 : 6.00 23644.67 92.36 0.00 0.00 0.00 0.00 0.00 00:08:06.807 [2024-12-15T11:48:14.714Z] =================================================================================================================== 00:08:06.807 
[2024-12-15T11:48:14.714Z] Total : 23644.67 92.36 0.00 0.00 0.00 0.00 0.00 00:08:06.807 00:08:07.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.743 Nvme0n1 : 7.00 23689.43 92.54 0.00 0.00 0.00 0.00 0.00 00:08:07.743 [2024-12-15T11:48:15.650Z] =================================================================================================================== 00:08:07.743 [2024-12-15T11:48:15.650Z] Total : 23689.43 92.54 0.00 0.00 0.00 0.00 0.00 00:08:07.743 00:08:08.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.680 Nvme0n1 : 8.00 23728.62 92.69 0.00 0.00 0.00 0.00 0.00 00:08:08.680 [2024-12-15T11:48:16.587Z] =================================================================================================================== 00:08:08.680 [2024-12-15T11:48:16.587Z] Total : 23728.62 92.69 0.00 0.00 0.00 0.00 0.00 00:08:08.680 00:08:09.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.618 Nvme0n1 : 9.00 23747.00 92.76 0.00 0.00 0.00 0.00 0.00 00:08:09.618 [2024-12-15T11:48:17.525Z] =================================================================================================================== 00:08:09.618 [2024-12-15T11:48:17.525Z] Total : 23747.00 92.76 0.00 0.00 0.00 0.00 0.00 00:08:09.618 00:08:10.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.557 Nvme0n1 : 10.00 23759.30 92.81 0.00 0.00 0.00 0.00 0.00 00:08:10.557 [2024-12-15T11:48:18.464Z] =================================================================================================================== 00:08:10.557 [2024-12-15T11:48:18.464Z] Total : 23759.30 92.81 0.00 0.00 0.00 0.00 0.00 00:08:10.557 00:08:10.557 00:08:10.557 Latency(us) 00:08:10.557 [2024-12-15T11:48:18.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:10.557 Nvme0n1 : 10.00 23763.02 92.82 0.00 0.00 5383.76 3151.97 11546.82 00:08:10.557 [2024-12-15T11:48:18.464Z] =================================================================================================================== 00:08:10.557 [2024-12-15T11:48:18.464Z] Total : 23763.02 92.82 0.00 0.00 5383.76 3151.97 11546.82 00:08:10.557 { 00:08:10.557 "results": [ 00:08:10.557 { 00:08:10.557 "job": "Nvme0n1", 00:08:10.557 "core_mask": "0x2", 00:08:10.557 "workload": "randwrite", 00:08:10.557 "status": "finished", 00:08:10.557 "queue_depth": 128, 00:08:10.557 "io_size": 4096, 00:08:10.557 "runtime": 10.003822, 00:08:10.557 "iops": 23763.017774606546, 00:08:10.557 "mibps": 92.82428818205682, 00:08:10.557 "io_failed": 0, 00:08:10.557 "io_timeout": 0, 00:08:10.557 "avg_latency_us": 5383.755741963218, 00:08:10.557 "min_latency_us": 3151.9695238095237, 00:08:10.557 "max_latency_us": 11546.819047619048 00:08:10.557 } 00:08:10.557 ], 00:08:10.557 "core_count": 1 00:08:10.557 } 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 823576 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 823576 ']' 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 823576 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 823576 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:10.557 12:48:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 823576' 00:08:10.557 killing process with pid 823576 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 823576 00:08:10.557 Received shutdown signal, test time was about 10.000000 seconds 00:08:10.557 00:08:10.557 Latency(us) 00:08:10.557 [2024-12-15T11:48:18.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.557 [2024-12-15T11:48:18.464Z] =================================================================================================================== 00:08:10.557 [2024-12-15T11:48:18.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:10.557 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 823576 00:08:10.816 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.075 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:11.075 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:11.075 12:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 819849 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 819849 00:08:11.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 819849 Killed "${NVMF_APP[@]}" "$@" 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=825410 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 825410 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 825410 ']' 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.334 12:48:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.334 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.594 [2024-12-15 12:48:19.247205] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:11.594 [2024-12-15 12:48:19.247250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.594 [2024-12-15 12:48:19.326284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.594 [2024-12-15 12:48:19.347660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.594 [2024-12-15 12:48:19.347697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.594 [2024-12-15 12:48:19.347704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.594 [2024-12-15 12:48:19.347711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.594 [2024-12-15 12:48:19.347716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:11.594 [2024-12-15 12:48:19.348207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.594 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.594 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:11.594 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:11.594 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.594 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:11.594 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.594 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.852 [2024-12-15 12:48:19.637681] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:11.852 [2024-12-15 12:48:19.637771] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:11.852 [2024-12-15 12:48:19.637800] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:11.852 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:11.852 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0efc11f5-8436-49e8-b72d-b1b7416036b1 00:08:11.852 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0efc11f5-8436-49e8-b72d-b1b7416036b1 
00:08:11.853 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.853 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:11.853 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.853 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.853 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:12.112 12:48:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0efc11f5-8436-49e8-b72d-b1b7416036b1 -t 2000 00:08:12.372 [ 00:08:12.372 { 00:08:12.372 "name": "0efc11f5-8436-49e8-b72d-b1b7416036b1", 00:08:12.372 "aliases": [ 00:08:12.372 "lvs/lvol" 00:08:12.372 ], 00:08:12.372 "product_name": "Logical Volume", 00:08:12.372 "block_size": 4096, 00:08:12.372 "num_blocks": 38912, 00:08:12.372 "uuid": "0efc11f5-8436-49e8-b72d-b1b7416036b1", 00:08:12.372 "assigned_rate_limits": { 00:08:12.372 "rw_ios_per_sec": 0, 00:08:12.372 "rw_mbytes_per_sec": 0, 00:08:12.372 "r_mbytes_per_sec": 0, 00:08:12.372 "w_mbytes_per_sec": 0 00:08:12.372 }, 00:08:12.372 "claimed": false, 00:08:12.372 "zoned": false, 00:08:12.372 "supported_io_types": { 00:08:12.372 "read": true, 00:08:12.372 "write": true, 00:08:12.372 "unmap": true, 00:08:12.372 "flush": false, 00:08:12.372 "reset": true, 00:08:12.372 "nvme_admin": false, 00:08:12.372 "nvme_io": false, 00:08:12.372 "nvme_io_md": false, 00:08:12.372 "write_zeroes": true, 00:08:12.372 "zcopy": false, 00:08:12.372 "get_zone_info": false, 00:08:12.372 "zone_management": false, 00:08:12.372 "zone_append": 
false, 00:08:12.372 "compare": false, 00:08:12.372 "compare_and_write": false, 00:08:12.372 "abort": false, 00:08:12.372 "seek_hole": true, 00:08:12.372 "seek_data": true, 00:08:12.372 "copy": false, 00:08:12.372 "nvme_iov_md": false 00:08:12.372 }, 00:08:12.372 "driver_specific": { 00:08:12.372 "lvol": { 00:08:12.372 "lvol_store_uuid": "67951733-4d18-49e7-99a0-e6603b8c4903", 00:08:12.372 "base_bdev": "aio_bdev", 00:08:12.372 "thin_provision": false, 00:08:12.372 "num_allocated_clusters": 38, 00:08:12.372 "snapshot": false, 00:08:12.372 "clone": false, 00:08:12.372 "esnap_clone": false 00:08:12.372 } 00:08:12.372 } 00:08:12.372 } 00:08:12.372 ] 00:08:12.372 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:12.372 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:12.372 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:12.372 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:12.372 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:12.372 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:12.631 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:12.631 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:12.890 [2024-12-15 12:48:20.602340] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:12.890 12:48:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:12.890 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:13.150 request: 00:08:13.150 { 00:08:13.150 "uuid": "67951733-4d18-49e7-99a0-e6603b8c4903", 00:08:13.150 "method": "bdev_lvol_get_lvstores", 00:08:13.150 "req_id": 1 00:08:13.150 } 00:08:13.150 Got JSON-RPC error response 00:08:13.150 response: 00:08:13.150 { 00:08:13.150 "code": -19, 00:08:13.150 "message": "No such device" 00:08:13.150 } 00:08:13.150 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:13.150 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.150 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.150 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.150 12:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.150 aio_bdev 00:08:13.150 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0efc11f5-8436-49e8-b72d-b1b7416036b1 00:08:13.150 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0efc11f5-8436-49e8-b72d-b1b7416036b1 00:08:13.150 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.150 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:13.150 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.150 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.150 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:13.409 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0efc11f5-8436-49e8-b72d-b1b7416036b1 -t 2000 00:08:13.668 [ 00:08:13.668 { 00:08:13.668 "name": "0efc11f5-8436-49e8-b72d-b1b7416036b1", 00:08:13.668 "aliases": [ 00:08:13.668 "lvs/lvol" 00:08:13.668 ], 00:08:13.668 "product_name": "Logical Volume", 00:08:13.668 "block_size": 4096, 00:08:13.668 "num_blocks": 38912, 00:08:13.668 "uuid": "0efc11f5-8436-49e8-b72d-b1b7416036b1", 00:08:13.668 "assigned_rate_limits": { 00:08:13.668 "rw_ios_per_sec": 0, 00:08:13.668 "rw_mbytes_per_sec": 0, 00:08:13.668 "r_mbytes_per_sec": 0, 00:08:13.668 "w_mbytes_per_sec": 0 00:08:13.668 }, 00:08:13.668 "claimed": false, 00:08:13.668 "zoned": false, 00:08:13.668 "supported_io_types": { 00:08:13.668 "read": true, 00:08:13.668 "write": true, 00:08:13.668 "unmap": true, 00:08:13.668 "flush": false, 00:08:13.668 "reset": true, 00:08:13.668 "nvme_admin": false, 00:08:13.668 "nvme_io": false, 00:08:13.668 "nvme_io_md": false, 00:08:13.668 "write_zeroes": true, 00:08:13.668 "zcopy": false, 00:08:13.668 "get_zone_info": false, 00:08:13.668 "zone_management": false, 00:08:13.668 "zone_append": false, 00:08:13.668 "compare": false, 00:08:13.668 "compare_and_write": false, 
00:08:13.668 "abort": false, 00:08:13.668 "seek_hole": true, 00:08:13.668 "seek_data": true, 00:08:13.668 "copy": false, 00:08:13.668 "nvme_iov_md": false 00:08:13.668 }, 00:08:13.668 "driver_specific": { 00:08:13.668 "lvol": { 00:08:13.668 "lvol_store_uuid": "67951733-4d18-49e7-99a0-e6603b8c4903", 00:08:13.668 "base_bdev": "aio_bdev", 00:08:13.668 "thin_provision": false, 00:08:13.668 "num_allocated_clusters": 38, 00:08:13.668 "snapshot": false, 00:08:13.668 "clone": false, 00:08:13.668 "esnap_clone": false 00:08:13.668 } 00:08:13.668 } 00:08:13.668 } 00:08:13.668 ] 00:08:13.668 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:13.668 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:13.668 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:13.928 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:13.928 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:13.928 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:13.928 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:13.928 12:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0efc11f5-8436-49e8-b72d-b1b7416036b1 00:08:14.208 12:48:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 67951733-4d18-49e7-99a0-e6603b8c4903 00:08:14.506 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:14.506 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.506 00:08:14.506 real 0m16.767s 00:08:14.506 user 0m43.518s 00:08:14.506 sys 0m3.827s 00:08:14.506 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.506 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:14.506 ************************************ 00:08:14.506 END TEST lvs_grow_dirty 00:08:14.506 ************************************ 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:14.789 nvmf_trace.0 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.789 rmmod nvme_tcp 00:08:14.789 rmmod nvme_fabrics 00:08:14.789 rmmod nvme_keyring 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 825410 ']' 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 825410 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 825410 ']' 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 825410 
00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825410 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825410' 00:08:14.789 killing process with pid 825410 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 825410 00:08:14.789 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 825410 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.080 12:48:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.059 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.059 00:08:17.059 real 0m41.683s 00:08:17.059 user 1m4.408s 00:08:17.059 sys 0m10.194s 00:08:17.059 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.059 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 ************************************ 00:08:17.059 END TEST nvmf_lvs_grow 00:08:17.059 ************************************ 00:08:17.059 12:48:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:17.059 12:48:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.059 12:48:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.059 12:48:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 ************************************ 00:08:17.059 START TEST nvmf_bdev_io_wait 00:08:17.059 ************************************ 00:08:17.059 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:17.319 * Looking for test storage... 
00:08:17.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.319 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:17.319 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:17.319 12:48:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:17.319 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.319 --rc genhtml_branch_coverage=1 00:08:17.319 --rc genhtml_function_coverage=1 00:08:17.319 --rc genhtml_legend=1 00:08:17.319 --rc geninfo_all_blocks=1 00:08:17.319 --rc geninfo_unexecuted_blocks=1 00:08:17.319 00:08:17.319 ' 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.319 --rc genhtml_branch_coverage=1 00:08:17.319 --rc genhtml_function_coverage=1 00:08:17.319 --rc genhtml_legend=1 00:08:17.319 --rc geninfo_all_blocks=1 00:08:17.319 --rc geninfo_unexecuted_blocks=1 00:08:17.319 00:08:17.319 ' 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.319 --rc genhtml_branch_coverage=1 00:08:17.319 --rc genhtml_function_coverage=1 00:08:17.319 --rc genhtml_legend=1 00:08:17.319 --rc geninfo_all_blocks=1 00:08:17.319 --rc geninfo_unexecuted_blocks=1 00:08:17.319 00:08:17.319 ' 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:17.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.319 --rc genhtml_branch_coverage=1 00:08:17.319 --rc genhtml_function_coverage=1 00:08:17.319 --rc genhtml_legend=1 00:08:17.319 --rc geninfo_all_blocks=1 00:08:17.319 --rc geninfo_unexecuted_blocks=1 00:08:17.319 00:08:17.319 ' 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.319 12:48:25 
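The `lt 1.15 2` / `cmp_versions` trace above gates the lcov options on a dotted-version comparison: both versions are split on `.`, `-` and `:` and compared field by field. A standalone sketch of the same idea (assumes purely numeric fields; `version_lt` is my name, the harness's helper is `cmp_versions` in `scripts/common.sh`):

```shell
# Dotted-version "less than": split on '.', '-' and ':', compare
# numerically field by field, padding missing fields with 0.
version_lt() {
    local IFS='.-:'
    local -a v1=($1) v2=($2)
    local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Numeric field-wise comparison is what makes `1.2 < 1.10` come out correctly, where a plain string compare would not.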
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.319 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
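The `[: : integer expression expected` line above is bash complaining about `'[' '' -eq 1 ']'`: an unset variable was handed to a numeric test. The harness tolerates the noise; supplying a default silences it (`FLAG` below is a stand-in name, not the harness's variable):

```shell
# Testing an empty value with -eq is what produced the
# "integer expression expected" warning in the trace; defaulting
# the expansion to 0 keeps the test purely numeric.
FLAG=""
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```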
00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.320 12:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.893 12:48:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:23.893 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:23.893 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.893 12:48:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:23.893 Found net devices under 0000:af:00.0: cvl_0_0 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.893 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.894 
12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:23.894 Found net devices under 0000:af:00.1: cvl_0_1 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.894 12:48:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.894 12:48:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:08:23.894 00:08:23.894 --- 10.0.0.2 ping statistics --- 00:08:23.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.894 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
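After moving `cvl_0_0` into the `cvl_0_0_ns_spdk` namespace, the harness verifies the plumbing with a single-packet ping in each direction. A small parsing sketch for pulling the average rtt out of ping's statistics footer (sample line copied from the log; the field index is an assumption tied to this output format):

```shell
# ping's summary line has the shape
#   rtt min/avg/max/mdev = A/B/C/D ms
# so splitting on '/' and spaces puts the average in field 8.
stats='rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms'
avg=$(printf '%s\n' "$stats" | awk -F'[/ ]' '{print $8}')
echo "average rtt: $avg ms"
```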
00:08:23.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:08:23.894 00:08:23.894 --- 10.0.0.1 ping statistics --- 00:08:23.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.894 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=829613 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 829613 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 829613 ']' 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 [2024-12-15 12:48:31.118333] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:23.894 [2024-12-15 12:48:31.118379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.894 [2024-12-15 12:48:31.197726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.894 [2024-12-15 12:48:31.221950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.894 [2024-12-15 12:48:31.221990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:23.894 [2024-12-15 12:48:31.221997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.894 [2024-12-15 12:48:31.222003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.894 [2024-12-15 12:48:31.222008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.894 [2024-12-15 12:48:31.223393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.894 [2024-12-15 12:48:31.223502] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.894 [2024-12-15 12:48:31.223612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.894 [2024-12-15 12:48:31.223613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 12:48:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 [2024-12-15 12:48:31.379452] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 Malloc0 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.894 
12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.894 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.895 [2024-12-15 12:48:31.426401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=829647 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=829649 
00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:23.895 { 00:08:23.895 "params": { 00:08:23.895 "name": "Nvme$subsystem", 00:08:23.895 "trtype": "$TEST_TRANSPORT", 00:08:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.895 "adrfam": "ipv4", 00:08:23.895 "trsvcid": "$NVMF_PORT", 00:08:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.895 "hdgst": ${hdgst:-false}, 00:08:23.895 "ddgst": ${ddgst:-false} 00:08:23.895 }, 00:08:23.895 "method": "bdev_nvme_attach_controller" 00:08:23.895 } 00:08:23.895 EOF 00:08:23.895 )") 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=829651 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:23.895 { 00:08:23.895 "params": { 00:08:23.895 "name": "Nvme$subsystem", 00:08:23.895 "trtype": "$TEST_TRANSPORT", 00:08:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.895 "adrfam": "ipv4", 00:08:23.895 "trsvcid": "$NVMF_PORT", 00:08:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.895 "hdgst": ${hdgst:-false}, 00:08:23.895 "ddgst": ${ddgst:-false} 00:08:23.895 }, 00:08:23.895 "method": "bdev_nvme_attach_controller" 00:08:23.895 } 00:08:23.895 EOF 00:08:23.895 )") 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=829654 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:23.895 12:48:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:23.895 { 00:08:23.895 "params": { 00:08:23.895 "name": "Nvme$subsystem", 00:08:23.895 "trtype": "$TEST_TRANSPORT", 00:08:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.895 "adrfam": "ipv4", 00:08:23.895 "trsvcid": "$NVMF_PORT", 00:08:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.895 "hdgst": ${hdgst:-false}, 00:08:23.895 "ddgst": ${ddgst:-false} 00:08:23.895 }, 00:08:23.895 "method": "bdev_nvme_attach_controller" 00:08:23.895 } 00:08:23.895 EOF 00:08:23.895 )") 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:23.895 { 00:08:23.895 "params": { 00:08:23.895 "name": "Nvme$subsystem", 00:08:23.895 "trtype": "$TEST_TRANSPORT", 00:08:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:23.895 "adrfam": "ipv4", 00:08:23.895 "trsvcid": "$NVMF_PORT", 00:08:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:23.895 "hdgst": ${hdgst:-false}, 00:08:23.895 "ddgst": ${ddgst:-false} 00:08:23.895 }, 00:08:23.895 "method": "bdev_nvme_attach_controller" 00:08:23.895 } 00:08:23.895 EOF 00:08:23.895 )") 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 829647 00:08:23.895 12:48:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:23.895 "params": { 00:08:23.895 "name": "Nvme1", 00:08:23.895 "trtype": "tcp", 00:08:23.895 "traddr": "10.0.0.2", 00:08:23.895 "adrfam": "ipv4", 00:08:23.895 "trsvcid": "4420", 00:08:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:23.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:23.895 "hdgst": false, 00:08:23.895 "ddgst": false 00:08:23.895 }, 00:08:23.895 "method": "bdev_nvme_attach_controller" 00:08:23.895 }' 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:23.895 "params": { 00:08:23.895 "name": "Nvme1", 00:08:23.895 "trtype": "tcp", 00:08:23.895 "traddr": "10.0.0.2", 00:08:23.895 "adrfam": "ipv4", 00:08:23.895 "trsvcid": "4420", 00:08:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:23.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:23.895 "hdgst": false, 00:08:23.895 "ddgst": false 00:08:23.895 }, 00:08:23.895 "method": "bdev_nvme_attach_controller" 00:08:23.895 }' 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:23.895 "params": { 00:08:23.895 "name": "Nvme1", 00:08:23.895 "trtype": "tcp", 00:08:23.895 "traddr": "10.0.0.2", 00:08:23.895 "adrfam": "ipv4", 00:08:23.895 "trsvcid": "4420", 00:08:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:23.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:23.895 "hdgst": false, 00:08:23.895 "ddgst": false 00:08:23.895 }, 00:08:23.895 "method": "bdev_nvme_attach_controller" 00:08:23.895 }' 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:23.895 12:48:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:23.895 "params": { 00:08:23.895 "name": "Nvme1", 00:08:23.895 "trtype": "tcp", 00:08:23.895 "traddr": "10.0.0.2", 00:08:23.895 "adrfam": "ipv4", 00:08:23.895 "trsvcid": "4420", 00:08:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:23.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:23.895 "hdgst": false, 00:08:23.895 "ddgst": false 00:08:23.895 }, 00:08:23.895 "method": "bdev_nvme_attach_controller" 00:08:23.895 }' 00:08:23.895 [2024-12-15 12:48:31.478649] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:23.895 [2024-12-15 12:48:31.478699] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:23.895 [2024-12-15 12:48:31.480829] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:23.895 [2024-12-15 12:48:31.480876] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:23.895 [2024-12-15 12:48:31.481152] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:23.895 [2024-12-15 12:48:31.481190] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:23.896 [2024-12-15 12:48:31.483108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:23.896 [2024-12-15 12:48:31.483153] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:23.896 [2024-12-15 12:48:31.667913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.896 [2024-12-15 12:48:31.685024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:23.896 [2024-12-15 12:48:31.778364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.155 [2024-12-15 12:48:31.800830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:24.155 [2024-12-15 12:48:31.829091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.155 [2024-12-15 12:48:31.844994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:24.155 [2024-12-15 12:48:31.871642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.155 [2024-12-15 12:48:31.887451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:24.155 Running I/O for 1 seconds... 00:08:24.155 Running I/O for 1 seconds... 00:08:24.155 Running I/O for 1 seconds... 00:08:24.414 Running I/O for 1 seconds... 
00:08:25.350 243144.00 IOPS, 949.78 MiB/s [2024-12-15T11:48:33.257Z] 11488.00 IOPS, 44.88 MiB/s [2024-12-15T11:48:33.257Z] 11416.00 IOPS, 44.59 MiB/s 00:08:25.350 Latency(us) 00:08:25.350 [2024-12-15T11:48:33.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.351 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:25.351 Nvme1n1 : 1.00 242776.18 948.34 0.00 0.00 523.86 220.40 1490.16 00:08:25.351 [2024-12-15T11:48:33.258Z] =================================================================================================================== 00:08:25.351 [2024-12-15T11:48:33.258Z] Total : 242776.18 948.34 0.00 0.00 523.86 220.40 1490.16 00:08:25.351 00:08:25.351 Latency(us) 00:08:25.351 [2024-12-15T11:48:33.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.351 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:25.351 Nvme1n1 : 1.01 11546.80 45.10 0.00 0.00 11041.62 5773.41 20347.37 00:08:25.351 [2024-12-15T11:48:33.258Z] =================================================================================================================== 00:08:25.351 [2024-12-15T11:48:33.258Z] Total : 11546.80 45.10 0.00 0.00 11041.62 5773.41 20347.37 00:08:25.351 00:08:25.351 Latency(us) 00:08:25.351 [2024-12-15T11:48:33.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.351 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:25.351 Nvme1n1 : 1.01 11476.28 44.83 0.00 0.00 11115.07 5586.16 20597.03 00:08:25.351 [2024-12-15T11:48:33.258Z] =================================================================================================================== 00:08:25.351 [2024-12-15T11:48:33.258Z] Total : 11476.28 44.83 0.00 0.00 11115.07 5586.16 20597.03 00:08:25.351 10087.00 IOPS, 39.40 MiB/s 00:08:25.351 Latency(us) 00:08:25.351 [2024-12-15T11:48:33.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:08:25.351 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:25.351 Nvme1n1 : 1.01 10158.11 39.68 0.00 0.00 12561.04 4774.77 25465.42 00:08:25.351 [2024-12-15T11:48:33.258Z] =================================================================================================================== 00:08:25.351 [2024-12-15T11:48:33.258Z] Total : 10158.11 39.68 0.00 0.00 12561.04 4774.77 25465.42 00:08:25.351 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 829649 00:08:25.610 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 829651 00:08:25.610 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 829654 00:08:25.610 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.610 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.610 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.610 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:25.611 12:48:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.611 rmmod nvme_tcp 00:08:25.611 rmmod nvme_fabrics 00:08:25.611 rmmod nvme_keyring 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 829613 ']' 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 829613 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 829613 ']' 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 829613 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 829613 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 829613' 00:08:25.611 killing process with pid 829613 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 
-- # kill 829613 00:08:25.611 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 829613 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.870 12:48:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.781 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:27.781 00:08:27.781 real 0m10.753s 00:08:27.781 user 0m15.988s 00:08:27.781 sys 0m6.186s 00:08:27.781 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.781 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.781 
************************************ 00:08:27.781 END TEST nvmf_bdev_io_wait 00:08:27.781 ************************************ 00:08:27.781 12:48:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:27.781 12:48:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:27.781 12:48:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.781 12:48:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.041 ************************************ 00:08:28.041 START TEST nvmf_queue_depth 00:08:28.041 ************************************ 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:28.041 * Looking for test storage... 
00:08:28.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:28.041 
12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.041 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:28.041 --rc genhtml_branch_coverage=1 00:08:28.041 --rc genhtml_function_coverage=1 00:08:28.041 --rc genhtml_legend=1 00:08:28.041 --rc geninfo_all_blocks=1 00:08:28.041 --rc geninfo_unexecuted_blocks=1 00:08:28.041 00:08:28.041 ' 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.041 --rc genhtml_branch_coverage=1 00:08:28.041 --rc genhtml_function_coverage=1 00:08:28.041 --rc genhtml_legend=1 00:08:28.041 --rc geninfo_all_blocks=1 00:08:28.041 --rc geninfo_unexecuted_blocks=1 00:08:28.041 00:08:28.041 ' 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.041 --rc genhtml_branch_coverage=1 00:08:28.041 --rc genhtml_function_coverage=1 00:08:28.041 --rc genhtml_legend=1 00:08:28.041 --rc geninfo_all_blocks=1 00:08:28.041 --rc geninfo_unexecuted_blocks=1 00:08:28.041 00:08:28.041 ' 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.041 --rc genhtml_branch_coverage=1 00:08:28.041 --rc genhtml_function_coverage=1 00:08:28.041 --rc genhtml_legend=1 00:08:28.041 --rc geninfo_all_blocks=1 00:08:28.041 --rc geninfo_unexecuted_blocks=1 00:08:28.041 00:08:28.041 ' 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.041 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.042 12:48:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.042 12:48:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.042 12:48:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.042 12:48:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.615 12:48:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.615 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:34.616 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:34.616 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:34.616 Found net devices under 0000:af:00.0: cvl_0_0 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:34.616 Found net devices under 0000:af:00.1: cvl_0_1 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.616 
12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:08:34.616 00:08:34.616 --- 10.0.0.2 ping statistics --- 00:08:34.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.616 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:08:34.616 00:08:34.616 --- 10.0.0.1 ping statistics --- 00:08:34.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.616 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=833561 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 833561 
00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 833561 ']' 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.616 12:48:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.616 [2024-12-15 12:48:41.913083] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:34.616 [2024-12-15 12:48:41.913131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.616 [2024-12-15 12:48:41.992862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.617 [2024-12-15 12:48:42.014036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.617 [2024-12-15 12:48:42.014071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:34.617 [2024-12-15 12:48:42.014078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.617 [2024-12-15 12:48:42.014084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.617 [2024-12-15 12:48:42.014089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.617 [2024-12-15 12:48:42.014603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.617 [2024-12-15 12:48:42.152992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.617 Malloc0 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.617 [2024-12-15 12:48:42.203260] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.617 12:48:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=833607 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 833607 /var/tmp/bdevperf.sock 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 833607 ']' 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.617 [2024-12-15 12:48:42.251733] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:34.617 [2024-12-15 12:48:42.251771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833607 ] 00:08:34.617 [2024-12-15 12:48:42.325441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.617 [2024-12-15 12:48:42.347579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.617 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.876 NVMe0n1 00:08:34.876 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.876 12:48:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.876 Running I/O for 10 seconds... 
00:08:37.192 11594.00 IOPS, 45.29 MiB/s [2024-12-15T11:48:46.035Z] 12099.00 IOPS, 47.26 MiB/s [2024-12-15T11:48:46.972Z] 12268.67 IOPS, 47.92 MiB/s [2024-12-15T11:48:47.956Z] 12279.00 IOPS, 47.96 MiB/s [2024-12-15T11:48:48.894Z] 12285.40 IOPS, 47.99 MiB/s [2024-12-15T11:48:49.831Z] 12350.67 IOPS, 48.24 MiB/s [2024-12-15T11:48:51.210Z] 12412.00 IOPS, 48.48 MiB/s [2024-12-15T11:48:52.147Z] 12400.75 IOPS, 48.44 MiB/s [2024-12-15T11:48:53.084Z] 12437.67 IOPS, 48.58 MiB/s [2024-12-15T11:48:53.084Z] 12473.20 IOPS, 48.72 MiB/s 00:08:45.177 Latency(us) 00:08:45.177 [2024-12-15T11:48:53.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.177 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:45.177 Verification LBA range: start 0x0 length 0x4000 00:08:45.177 NVMe0n1 : 10.06 12505.17 48.85 0.00 0.00 81644.77 17476.27 54675.75 00:08:45.177 [2024-12-15T11:48:53.084Z] =================================================================================================================== 00:08:45.177 [2024-12-15T11:48:53.084Z] Total : 12505.17 48.85 0.00 0.00 81644.77 17476.27 54675.75 00:08:45.177 { 00:08:45.177 "results": [ 00:08:45.177 { 00:08:45.177 "job": "NVMe0n1", 00:08:45.177 "core_mask": "0x1", 00:08:45.177 "workload": "verify", 00:08:45.177 "status": "finished", 00:08:45.177 "verify_range": { 00:08:45.177 "start": 0, 00:08:45.177 "length": 16384 00:08:45.177 }, 00:08:45.177 "queue_depth": 1024, 00:08:45.177 "io_size": 4096, 00:08:45.177 "runtime": 10.056319, 00:08:45.177 "iops": 12505.172121131001, 00:08:45.177 "mibps": 48.848328598167974, 00:08:45.177 "io_failed": 0, 00:08:45.177 "io_timeout": 0, 00:08:45.177 "avg_latency_us": 81644.7676403133, 00:08:45.177 "min_latency_us": 17476.266666666666, 00:08:45.177 "max_latency_us": 54675.74857142857 00:08:45.177 } 00:08:45.177 ], 00:08:45.177 "core_count": 1 00:08:45.177 } 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 833607 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 833607 ']' 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 833607 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833607 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833607' 00:08:45.177 killing process with pid 833607 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 833607 00:08:45.177 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.177 00:08:45.177 Latency(us) 00:08:45.177 [2024-12-15T11:48:53.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.177 [2024-12-15T11:48:53.084Z] =================================================================================================================== 00:08:45.177 [2024-12-15T11:48:53.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.177 12:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 833607 00:08:45.177 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:45.177 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:45.177 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:45.177 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:45.177 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.177 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:45.177 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.177 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.177 rmmod nvme_tcp 00:08:45.436 rmmod nvme_fabrics 00:08:45.436 rmmod nvme_keyring 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 833561 ']' 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 833561 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 833561 ']' 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 833561 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833561 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833561' 00:08:45.436 killing process with pid 833561 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 833561 00:08:45.436 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 833561 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.695 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.696 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.696 12:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.601 12:48:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.601 00:08:47.601 real 0m19.712s 00:08:47.601 user 0m23.223s 00:08:47.601 sys 0m5.954s 00:08:47.601 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.601 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.601 ************************************ 00:08:47.601 END TEST nvmf_queue_depth 00:08:47.601 ************************************ 00:08:47.601 12:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:47.601 12:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.601 12:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.601 12:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.601 ************************************ 00:08:47.601 START TEST nvmf_target_multipath 00:08:47.601 ************************************ 00:08:47.601 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:47.860 * Looking for test storage... 
00:08:47.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.860 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:47.861 12:48:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.861 --rc genhtml_branch_coverage=1 00:08:47.861 --rc genhtml_function_coverage=1 00:08:47.861 --rc genhtml_legend=1 00:08:47.861 --rc geninfo_all_blocks=1 00:08:47.861 --rc geninfo_unexecuted_blocks=1 00:08:47.861 00:08:47.861 ' 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.861 --rc genhtml_branch_coverage=1 00:08:47.861 --rc genhtml_function_coverage=1 00:08:47.861 --rc genhtml_legend=1 00:08:47.861 --rc geninfo_all_blocks=1 00:08:47.861 --rc geninfo_unexecuted_blocks=1 00:08:47.861 00:08:47.861 ' 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:47.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.861 --rc genhtml_branch_coverage=1 00:08:47.861 --rc genhtml_function_coverage=1 00:08:47.861 --rc genhtml_legend=1 00:08:47.861 --rc geninfo_all_blocks=1 00:08:47.861 --rc geninfo_unexecuted_blocks=1 00:08:47.861 00:08:47.861 ' 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.861 --rc genhtml_branch_coverage=1 00:08:47.861 --rc genhtml_function_coverage=1 00:08:47.861 --rc genhtml_legend=1 00:08:47.861 --rc geninfo_all_blocks=1 00:08:47.861 --rc geninfo_unexecuted_blocks=1 00:08:47.861 00:08:47.861 ' 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.861 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.862 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.862 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.862 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.862 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.862 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.862 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.862 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.862 12:48:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:54.431 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:54.431 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:54.431 Found net devices under 0000:af:00.0: cvl_0_0 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.431 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.432 12:49:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:54.432 Found net devices under 0000:af:00.1: cvl_0_1 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:54.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:08:54.432 00:08:54.432 --- 10.0.0.2 ping statistics --- 00:08:54.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.432 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:08:54.432 00:08:54.432 --- 10.0.0.1 ping statistics --- 00:08:54.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.432 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:54.432 only one NIC for nvmf test 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:54.432 12:49:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.432 rmmod nvme_tcp 00:08:54.432 rmmod nvme_fabrics 00:08:54.432 rmmod nvme_keyring 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.432 12:49:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.539 00:08:56.539 real 0m8.416s 00:08:56.539 user 0m1.871s 00:08:56.539 sys 0m4.502s 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:56.539 ************************************ 00:08:56.539 END TEST nvmf_target_multipath 00:08:56.539 ************************************ 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.539 ************************************ 00:08:56.539 START TEST nvmf_zcopy 00:08:56.539 ************************************ 00:08:56.539 12:49:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:56.539 * Looking for test storage... 00:08:56.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
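The scripts/common.sh trace that follows (`lt 1.15 2` via `cmp_versions`) checks whether the installed lcov predates 2.0 by splitting both version strings on `.`, `-`, or `:` and comparing the fields numerically. A minimal re-creation of that logic, inferred from the xtrace lines rather than copied from the script:

```shell
# lt VER1 VER2 -> exit 0 (true) when VER1 < VER2, component-wise.
# Reconstructed from the cmp_versions xtrace in this log; missing
# components default to 0, and equal versions are not "less than".
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1
}

lt 1.15 2 && echo "lcov older than 2.0"
```

In the trace below this comparison succeeds (lcov 1.15 < 2), so the old-lcov `--rc lcov_branch_coverage=1 ...` option set is exported.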
00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.540 12:49:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:56.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.540 --rc genhtml_branch_coverage=1 00:08:56.540 --rc genhtml_function_coverage=1 00:08:56.540 --rc genhtml_legend=1 00:08:56.540 --rc geninfo_all_blocks=1 00:08:56.540 --rc geninfo_unexecuted_blocks=1 00:08:56.540 00:08:56.540 ' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:56.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.540 --rc genhtml_branch_coverage=1 00:08:56.540 --rc genhtml_function_coverage=1 00:08:56.540 --rc genhtml_legend=1 00:08:56.540 --rc geninfo_all_blocks=1 00:08:56.540 --rc geninfo_unexecuted_blocks=1 00:08:56.540 00:08:56.540 ' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:56.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.540 --rc genhtml_branch_coverage=1 00:08:56.540 --rc genhtml_function_coverage=1 00:08:56.540 --rc genhtml_legend=1 00:08:56.540 --rc geninfo_all_blocks=1 00:08:56.540 --rc geninfo_unexecuted_blocks=1 00:08:56.540 00:08:56.540 ' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:56.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.540 --rc genhtml_branch_coverage=1 00:08:56.540 --rc 
genhtml_function_coverage=1 00:08:56.540 --rc genhtml_legend=1 00:08:56.540 --rc geninfo_all_blocks=1 00:08:56.540 --rc geninfo_unexecuted_blocks=1 00:08:56.540 00:08:56.540 ' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.540 12:49:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.540 12:49:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.540 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.541 12:49:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.112 12:49:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:03.112 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:03.112 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:03.112 Found net devices under 0000:af:00.0: cvl_0_0 00:09:03.112 12:49:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:03.112 Found net devices under 0000:af:00.1: cvl_0_1 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.112 12:49:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.112 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:03.113 12:49:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:09:03.113 00:09:03.113 --- 10.0.0.2 ping statistics --- 00:09:03.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.113 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:09:03.113 00:09:03.113 --- 10.0.0.1 ping statistics --- 00:09:03.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.113 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=842342 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 842342 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 842342 ']' 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.113 [2024-12-15 12:49:10.178779] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:03.113 [2024-12-15 12:49:10.178833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.113 [2024-12-15 12:49:10.254839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.113 [2024-12-15 12:49:10.275213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.113 [2024-12-15 12:49:10.275248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:03.113 [2024-12-15 12:49:10.275255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.113 [2024-12-15 12:49:10.275261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.113 [2024-12-15 12:49:10.275266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.113 [2024-12-15 12:49:10.275753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.113 [2024-12-15 12:49:10.418046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.113 [2024-12-15 12:49:10.438253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.113 malloc0 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.113 { 00:09:03.113 "params": { 00:09:03.113 "name": "Nvme$subsystem", 00:09:03.113 "trtype": "$TEST_TRANSPORT", 00:09:03.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.113 "adrfam": "ipv4", 00:09:03.113 "trsvcid": "$NVMF_PORT", 00:09:03.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.113 "hdgst": ${hdgst:-false}, 00:09:03.113 "ddgst": ${ddgst:-false} 00:09:03.113 }, 00:09:03.113 "method": "bdev_nvme_attach_controller" 00:09:03.113 } 00:09:03.113 EOF 00:09:03.113 )") 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:03.113 12:49:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.113 "params": { 00:09:03.113 "name": "Nvme1", 00:09:03.113 "trtype": "tcp", 00:09:03.113 "traddr": "10.0.0.2", 00:09:03.114 "adrfam": "ipv4", 00:09:03.114 "trsvcid": "4420", 00:09:03.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.114 "hdgst": false, 00:09:03.114 "ddgst": false 00:09:03.114 }, 00:09:03.114 "method": "bdev_nvme_attach_controller" 00:09:03.114 }' 00:09:03.114 [2024-12-15 12:49:10.522822] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:03.114 [2024-12-15 12:49:10.522866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842395 ] 00:09:03.114 [2024-12-15 12:49:10.597761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.114 [2024-12-15 12:49:10.620287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.114 Running I/O for 10 seconds... 
00:09:05.429 8799.00 IOPS, 68.74 MiB/s [2024-12-15T11:49:14.272Z] 8851.50 IOPS, 69.15 MiB/s [2024-12-15T11:49:15.210Z] 8876.33 IOPS, 69.35 MiB/s [2024-12-15T11:49:16.148Z] 8896.00 IOPS, 69.50 MiB/s [2024-12-15T11:49:17.085Z] 8903.00 IOPS, 69.55 MiB/s [2024-12-15T11:49:18.023Z] 8905.83 IOPS, 69.58 MiB/s [2024-12-15T11:49:18.960Z] 8909.29 IOPS, 69.60 MiB/s [2024-12-15T11:49:20.338Z] 8916.62 IOPS, 69.66 MiB/s [2024-12-15T11:49:21.276Z] 8914.56 IOPS, 69.64 MiB/s [2024-12-15T11:49:21.276Z] 8902.00 IOPS, 69.55 MiB/s 00:09:13.369 Latency(us) 00:09:13.369 [2024-12-15T11:49:21.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.369 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:13.369 Verification LBA range: start 0x0 length 0x1000 00:09:13.369 Nvme1n1 : 10.01 8904.97 69.57 0.00 0.00 14333.06 2434.19 22719.15 00:09:13.369 [2024-12-15T11:49:21.276Z] =================================================================================================================== 00:09:13.369 [2024-12-15T11:49:21.276Z] Total : 8904.97 69.57 0.00 0.00 14333.06 2434.19 22719.15 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=844153 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:13.369 12:49:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:13.369 { 00:09:13.369 "params": { 00:09:13.369 "name": "Nvme$subsystem", 00:09:13.369 "trtype": "$TEST_TRANSPORT", 00:09:13.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.369 "adrfam": "ipv4", 00:09:13.369 "trsvcid": "$NVMF_PORT", 00:09:13.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.369 "hdgst": ${hdgst:-false}, 00:09:13.369 "ddgst": ${ddgst:-false} 00:09:13.369 }, 00:09:13.369 "method": "bdev_nvme_attach_controller" 00:09:13.369 } 00:09:13.369 EOF 00:09:13.369 )") 00:09:13.369 [2024-12-15 12:49:21.090442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.090477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:13.369 12:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:13.369 "params": { 00:09:13.369 "name": "Nvme1", 00:09:13.369 "trtype": "tcp", 00:09:13.369 "traddr": "10.0.0.2", 00:09:13.369 "adrfam": "ipv4", 00:09:13.369 "trsvcid": "4420", 00:09:13.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.369 "hdgst": false, 00:09:13.369 "ddgst": false 00:09:13.369 }, 00:09:13.369 "method": "bdev_nvme_attach_controller" 00:09:13.369 }' 00:09:13.369 [2024-12-15 12:49:21.102432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.102444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.114461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.114476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.126491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.126501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.134686] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:13.369 [2024-12-15 12:49:21.134728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844153 ] 00:09:13.369 [2024-12-15 12:49:21.138524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.138534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.150552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.150562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.162586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.162595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.174617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.174626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.186650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.186660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.198679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.198688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.210209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.369 [2024-12-15 12:49:21.210720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:13.369 [2024-12-15 12:49:21.210735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.222749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.222764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.232489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.369 [2024-12-15 12:49:21.234778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.234789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.246820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.246842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.258850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.258867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.369 [2024-12-15 12:49:21.270878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.369 [2024-12-15 12:49:21.270892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.282903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.282916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.294938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.294952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.306969] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.306984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.319221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.319241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.331244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.331261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.343275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.343290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.355310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.355326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.367343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.367358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.379384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.379399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.391409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.391425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 Running I/O for 5 seconds... 
00:09:13.628 [2024-12-15 12:49:21.407974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.407993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.419135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.419155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.433312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.433331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.446780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.446799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.628 [2024-12-15 12:49:21.460201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.628 [2024-12-15 12:49:21.460222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.629 [2024-12-15 12:49:21.474037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.629 [2024-12-15 12:49:21.474057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.629 [2024-12-15 12:49:21.487472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.629 [2024-12-15 12:49:21.487490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.629 [2024-12-15 12:49:21.500923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.629 [2024-12-15 12:49:21.500941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.629 [2024-12-15 12:49:21.514345] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:13.629 [2024-12-15 12:49:21.514363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:14.667 17174.00 IOPS, 134.17 MiB/s [2024-12-15T11:49:22.574Z]
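The two throughput samples in this stretch of the log (17174.00 IOPS at 134.17 MiB/s, later 17229.00 IOPS at 134.60 MiB/s) are mutually consistent with a fixed 8 KiB I/O size. That block size is an inference from the arithmetic, not something stated in this chunk:

```python
# Bandwidth follows from IOPS times the I/O size.
# 8 KiB (8192 bytes) is assumed, inferred from the numbers in the log.
IO_SIZE = 8192          # bytes, assumption
MIB = 1024 * 1024

def mib_per_s(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / MIB

print(f"{mib_per_s(17174.00):.2f}")  # 134.17, matching the first sample
print(f"{mib_per_s(17229.00):.2f}")  # 134.60, matching the second sample
```

Both samples rounding to the reported MiB/s figures under the same block size is what supports the 8 KiB reading.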
00:09:15.705 17229.00 IOPS, 134.60 MiB/s [2024-12-15T11:49:23.612Z]
add namespace 00:09:15.965 [2024-12-15 12:49:23.766880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.965 [2024-12-15 12:49:23.766897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.965 [2024-12-15 12:49:23.781001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.965 [2024-12-15 12:49:23.781019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.965 [2024-12-15 12:49:23.795325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.965 [2024-12-15 12:49:23.795342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.965 [2024-12-15 12:49:23.808886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.965 [2024-12-15 12:49:23.808904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.965 [2024-12-15 12:49:23.822262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.965 [2024-12-15 12:49:23.822280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.965 [2024-12-15 12:49:23.835938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.965 [2024-12-15 12:49:23.835956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.965 [2024-12-15 12:49:23.849658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.965 [2024-12-15 12:49:23.849676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.965 [2024-12-15 12:49:23.862906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.965 [2024-12-15 12:49:23.862925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.224 [2024-12-15 12:49:23.876777] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.224 [2024-12-15 12:49:23.876795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.224 [2024-12-15 12:49:23.890125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.224 [2024-12-15 12:49:23.890143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.224 [2024-12-15 12:49:23.904098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.224 [2024-12-15 12:49:23.904116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.224 [2024-12-15 12:49:23.917767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.224 [2024-12-15 12:49:23.917784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:23.926882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:23.926899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:23.940980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:23.940998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:23.954735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:23.954753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:23.968083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:23.968101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:23.981637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:23.981655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:23.996026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:23.996044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.009489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:24.009506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.023008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:24.023030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.032370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:24.032388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.046536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:24.046554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.059900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:24.059918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.073821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:24.073846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.087381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 
[2024-12-15 12:49:24.087400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.101070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:24.101090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.115048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:24.115066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.225 [2024-12-15 12:49:24.128613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.225 [2024-12-15 12:49:24.128632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.142319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.142338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.156143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.156162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.169499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.169517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.179310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.179329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.192881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.192901] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.206262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.206281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.219614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.219633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.233264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.233282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.246556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.246574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.255137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.255155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.269313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.269337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.282834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.282853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.296099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.296119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:16.484 [2024-12-15 12:49:24.309974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.309993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.323503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.323522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.332183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.332201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.346226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.346244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.359890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.359909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.372931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.372950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.484 [2024-12-15 12:49:24.386901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.484 [2024-12-15 12:49:24.386922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 17230.67 IOPS, 134.61 MiB/s [2024-12-15T11:49:24.650Z] [2024-12-15 12:49:24.401051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.401070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:16.743 [2024-12-15 12:49:24.414759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.414778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.428567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.428586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.442161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.442180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.456159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.456177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.465726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.465743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.479955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.479973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.493753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.493771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.507264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.507283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.515978] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.516000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.525111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.525128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.539586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.539606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.553092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.553112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.566681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.566699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.580421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.580439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.594589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.594606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.608248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.608266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.621953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.621972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.743 [2024-12-15 12:49:24.635718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.743 [2024-12-15 12:49:24.635736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.649627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.649646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.663026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.663044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.676932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.676950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.690840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.690859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.704583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.704601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.717808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.717831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.731909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 
[2024-12-15 12:49:24.731927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.743334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.743353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.757050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.757068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.770728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.770747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.779642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.779660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.793610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.793628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.807178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.002 [2024-12-15 12:49:24.807196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.002 [2024-12-15 12:49:24.820704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-12-15 12:49:24.820723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-12-15 12:49:24.834299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-12-15 12:49:24.834316] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-12-15 12:49:24.847711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-12-15 12:49:24.847729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-12-15 12:49:24.857031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-12-15 12:49:24.857049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-12-15 12:49:24.871153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-12-15 12:49:24.871171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-12-15 12:49:24.880116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-12-15 12:49:24.880134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-12-15 12:49:24.893917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-12-15 12:49:24.893942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.003 [2024-12-15 12:49:24.908188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.003 [2024-12-15 12:49:24.908206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:24.923885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:24.923904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:24.937796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:24.937815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:17.262 [2024-12-15 12:49:24.951262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:24.951280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:24.960169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:24.960186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:24.974202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:24.974219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:24.987932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:24.987950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.001486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.001504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.015522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.015540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.029052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.029070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.043043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.043061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.056497] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.056515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.070240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.070258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.083614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.083632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.096907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.096925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.110341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.110358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.124326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.124344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.138127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.138146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.151477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.151495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.262 [2024-12-15 12:49:25.165005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:17.262 [2024-12-15 12:49:25.165023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.178518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.178536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.187325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.187342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.201426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.201443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.215026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.215044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.228923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.228941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.242795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.242813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.256566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.256585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.270444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 
[2024-12-15 12:49:25.270462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.284323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.284340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.298097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.298115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.311267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.311285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.321101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.321119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.521 [2024-12-15 12:49:25.335313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.521 [2024-12-15 12:49:25.335331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.522 [2024-12-15 12:49:25.344008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.522 [2024-12-15 12:49:25.344027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.522 [2024-12-15 12:49:25.357855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.522 [2024-12-15 12:49:25.357873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.522 [2024-12-15 12:49:25.371338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.522 [2024-12-15 12:49:25.371356] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.522 17222.00 IOPS, 134.55 MiB/s [2024-12-15T11:49:25.429Z] 17196.00 IOPS, 134.34 MiB/s [2024-12-15T11:49:26.468Z]
00:09:18.561 Latency(us)
00:09:18.561 [2024-12-15T11:49:26.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:18.561 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:18.561 Nvme1n1 : 5.01 17198.84 134.37 0.00 0.00 7435.00 3370.42 18225.25
00:09:18.561 [2024-12-15T11:49:26.468Z] ===================================================================================================================
00:09:18.561 [2024-12-15T11:49:26.468Z] Total : 17198.84 134.37 0.00 0.00 7435.00 3370.42 18225.25
00:09:18.561 [2024-12-15 12:49:26.503932]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.820 [2024-12-15 12:49:26.503949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.820 [2024-12-15 12:49:26.515966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.820 [2024-12-15 12:49:26.515981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.820 [2024-12-15 12:49:26.527988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.820 [2024-12-15 12:49:26.527997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.820 [2024-12-15 12:49:26.540026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.820 [2024-12-15 12:49:26.540039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.820 [2024-12-15 12:49:26.552053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.820 [2024-12-15 12:49:26.552064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.820 [2024-12-15 12:49:26.564087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.820 [2024-12-15 12:49:26.564097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (844153) - No such process 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 844153 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- 
# set +x 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.820 delay0 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.820 12:49:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:19.080 [2024-12-15 12:49:26.755936] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:25.652 Initializing NVMe Controllers 00:09:25.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:25.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:25.652 Initialization complete. Launching workers. 
00:09:25.652 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 759 00:09:25.652 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1046, failed to submit 33 00:09:25.652 success 867, unsuccessful 179, failed 0 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.652 rmmod nvme_tcp 00:09:25.652 rmmod nvme_fabrics 00:09:25.652 rmmod nvme_keyring 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 842342 ']' 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 842342 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 842342 ']' 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 842342 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 842342 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 842342' 00:09:25.652 killing process with pid 842342 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 842342 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 842342 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.652 12:49:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.558 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.558 00:09:27.558 real 0m31.427s 00:09:27.558 user 0m42.274s 00:09:27.558 sys 0m10.886s 00:09:27.558 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.558 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.558 ************************************ 00:09:27.558 END TEST nvmf_zcopy 00:09:27.558 ************************************ 00:09:27.558 12:49:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:27.558 12:49:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.558 12:49:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.558 12:49:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.818 ************************************ 00:09:27.818 START TEST nvmf_nmic 00:09:27.818 ************************************ 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:27.818 * Looking for test storage... 
00:09:27.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.818 12:49:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.818 --rc genhtml_branch_coverage=1 00:09:27.818 --rc genhtml_function_coverage=1 00:09:27.818 --rc genhtml_legend=1 00:09:27.818 --rc geninfo_all_blocks=1 00:09:27.818 --rc geninfo_unexecuted_blocks=1 
00:09:27.818 00:09:27.818 ' 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.818 --rc genhtml_branch_coverage=1 00:09:27.818 --rc genhtml_function_coverage=1 00:09:27.818 --rc genhtml_legend=1 00:09:27.818 --rc geninfo_all_blocks=1 00:09:27.818 --rc geninfo_unexecuted_blocks=1 00:09:27.818 00:09:27.818 ' 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.818 --rc genhtml_branch_coverage=1 00:09:27.818 --rc genhtml_function_coverage=1 00:09:27.818 --rc genhtml_legend=1 00:09:27.818 --rc geninfo_all_blocks=1 00:09:27.818 --rc geninfo_unexecuted_blocks=1 00:09:27.818 00:09:27.818 ' 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.818 --rc genhtml_branch_coverage=1 00:09:27.818 --rc genhtml_function_coverage=1 00:09:27.818 --rc genhtml_legend=1 00:09:27.818 --rc geninfo_all_blocks=1 00:09:27.818 --rc geninfo_unexecuted_blocks=1 00:09:27.818 00:09:27.818 ' 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.818 12:49:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.818 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.819 
12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.819 12:49:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.394 12:49:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:34.394 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:34.394 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:34.394 Found net devices under 0000:af:00.0: cvl_0_0 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.394 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:34.395 Found net devices under 0000:af:00.1: cvl_0_1 00:09:34.395 
12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:34.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:34.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms
00:09:34.395
00:09:34.395 --- 10.0.0.2 ping statistics ---
00:09:34.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:34.395 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:34.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:34.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:09:34.395
00:09:34.395 --- 10.0.0.1 ping statistics ---
00:09:34.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:34.395 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=849637
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 849637
00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 849637 ']' 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.395 [2024-12-15 12:49:41.794116] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:34.395 [2024-12-15 12:49:41.794163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.395 [2024-12-15 12:49:41.874583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.395 [2024-12-15 12:49:41.898724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.395 [2024-12-15 12:49:41.898761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:34.395 [2024-12-15 12:49:41.898768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.395 [2024-12-15 12:49:41.898775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.395 [2024-12-15 12:49:41.898779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.395 [2024-12-15 12:49:41.900170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.395 [2024-12-15 12:49:41.900284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.395 [2024-12-15 12:49:41.900364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.395 [2024-12-15 12:49:41.900365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.395 12:49:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.395 [2024-12-15 12:49:42.033197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.395 
12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.395 Malloc0 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.395 [2024-12-15 12:49:42.105430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:34.395 test case1: single bdev can't be used in multiple subsystems 00:09:34.395 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.396 [2024-12-15 12:49:42.133341] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:34.396 [2024-12-15 
12:49:42.133362] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:34.396 [2024-12-15 12:49:42.133374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.396 request: 00:09:34.396 { 00:09:34.396 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:34.396 "namespace": { 00:09:34.396 "bdev_name": "Malloc0", 00:09:34.396 "no_auto_visible": false, 00:09:34.396 "hide_metadata": false 00:09:34.396 }, 00:09:34.396 "method": "nvmf_subsystem_add_ns", 00:09:34.396 "req_id": 1 00:09:34.396 } 00:09:34.396 Got JSON-RPC error response 00:09:34.396 response: 00:09:34.396 { 00:09:34.396 "code": -32602, 00:09:34.396 "message": "Invalid parameters" 00:09:34.396 } 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:34.396 Adding namespace failed - expected result. 
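The rejected `nvmf_subsystem_add_ns` above fails because the first subsystem already holds an `exclusive_write` claim on bdev `Malloc0`. A minimal bash sketch of that claim bookkeeping (a hypothetical model for illustration, not SPDK code; `add_ns` and the `claims` table are invented names):

```shell
#!/usr/bin/env bash
# Model of an exclusive bdev claim: the first subsystem to add a namespace
# claims the bdev; a second add is rejected, mirroring test case1 above.
declare -A claims   # bdev name -> claiming subsystem NQN

add_ns() {          # add_ns <subsystem-nqn> <bdev-name>
    local nqn=$1 bdev=$2
    if [[ -n ${claims[$bdev]:-} ]]; then
        echo "bdev $bdev already claimed by ${claims[$bdev]}" >&2
        return 1
    fi
    claims[$bdev]=$nqn
}

add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 && echo first-add-ok
add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo second-add-rejected
```

In the real target the claim is taken at `bdev_open` time, which is why the log reports the failure from `bdev.c` before the JSON-RPC layer returns `-32602 Invalid parameters`.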
00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:34.396 test case2: host connect to nvmf target in multiple paths 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.396 [2024-12-15 12:49:42.145482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.396 12:49:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.773 12:49:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:36.710 12:49:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:36.710 12:49:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:36.710 12:49:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.710 12:49:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:36.710 12:49:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
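The `waitforserial` helper invoked above polls `lsblk -o NAME,SERIAL` for the subsystem serial, sleeping 2 s between attempts for up to 16 tries. A generic retry helper in the same spirit (a simplified sketch; `wait_for` is an invented name, not the autotest function):

```shell
#!/usr/bin/env bash
# Retry a command until it succeeds, up to a maximum number of attempts,
# sleeping a fixed delay between tries -- the pattern waitforserial uses.
wait_for() {        # wait_for <max-tries> <delay-seconds> <command...>
    local tries=$1 delay=$2; shift 2
    local i
    for ((i = 0; i < tries; i++)); do
        "$@" && return 0
        sleep "$delay"
    done
    return 1
}

# e.g. poll until a device/file shows up (trivially true here):
wait_for 3 0 true && echo ready
```

The autotest version additionally compares the count of matching devices (`grep -c`) against an expected controller count, so it also covers the multipath case where two connects must both appear.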
00:09:38.621 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:38.621 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:38.621 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:38.621 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:38.621 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:38.621 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:38.621 12:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:38.621 [global] 00:09:38.621 thread=1 00:09:38.621 invalidate=1 00:09:38.621 rw=write 00:09:38.621 time_based=1 00:09:38.621 runtime=1 00:09:38.621 ioengine=libaio 00:09:38.621 direct=1 00:09:38.621 bs=4096 00:09:38.621 iodepth=1 00:09:38.621 norandommap=0 00:09:38.621 numjobs=1 00:09:38.621 00:09:38.621 verify_dump=1 00:09:38.621 verify_backlog=512 00:09:38.621 verify_state_save=0 00:09:38.621 do_verify=1 00:09:38.621 verify=crc32c-intel 00:09:38.621 [job0] 00:09:38.621 filename=/dev/nvme0n1 00:09:38.621 Could not set queue depth (nvme0n1) 00:09:38.880 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.881 fio-3.35 00:09:38.881 Starting 1 thread 00:09:40.260 00:09:40.260 job0: (groupid=0, jobs=1): err= 0: pid=850687: Sun Dec 15 12:49:47 2024 00:09:40.260 read: IOPS=1009, BW=4040KiB/s (4137kB/s)(4044KiB/1001msec) 00:09:40.260 slat (nsec): min=6766, max=28064, avg=7842.34, stdev=2215.53 00:09:40.260 clat (usec): min=151, max=42152, avg=813.23, stdev=4969.37 00:09:40.260 lat (usec): min=159, max=42161, 
avg=821.07, stdev=4970.91 00:09:40.260 clat percentiles (usec): 00:09:40.260 | 1.00th=[ 159], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 198], 00:09:40.260 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206], 00:09:40.260 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 231], 00:09:40.260 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:40.260 | 99.99th=[42206] 00:09:40.260 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:40.260 slat (usec): min=9, max=27691, avg=37.82, stdev=865.02 00:09:40.260 clat (usec): min=104, max=288, avg=123.12, stdev=15.56 00:09:40.260 lat (usec): min=114, max=27977, avg=160.94, stdev=870.26 00:09:40.260 clat percentiles (usec): 00:09:40.260 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 112], 20.00th=[ 114], 00:09:40.260 | 30.00th=[ 116], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 121], 00:09:40.260 | 70.00th=[ 124], 80.00th=[ 130], 90.00th=[ 145], 95.00th=[ 155], 00:09:40.260 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 285], 99.95th=[ 289], 00:09:40.260 | 99.99th=[ 289] 00:09:40.260 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:40.260 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:40.260 lat (usec) : 250=98.57%, 500=0.69% 00:09:40.260 lat (msec) : 50=0.74% 00:09:40.260 cpu : usr=1.00%, sys=2.00%, ctx=2037, majf=0, minf=1 00:09:40.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.261 issued rwts: total=1011,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.261 00:09:40.261 Run status group 0 (all jobs): 00:09:40.261 READ: bw=4040KiB/s (4137kB/s), 4040KiB/s-4040KiB/s (4137kB/s-4137kB/s), io=4044KiB (4141kB), run=1001-1001msec 
00:09:40.261 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:09:40.261 00:09:40.261 Disk stats (read/write): 00:09:40.261 nvme0n1: ios=862/1024, merge=0/0, ticks=1688/124, in_queue=1812, util=98.60% 00:09:40.261 12:49:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.261 rmmod nvme_tcp 00:09:40.261 rmmod nvme_fabrics 00:09:40.261 rmmod nvme_keyring 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 849637 ']' 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 849637 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 849637 ']' 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 849637 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.261 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 849637 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 849637' 00:09:40.520 killing process with pid 849637 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 849637 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 849637 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.520 12:49:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.057 00:09:43.057 real 0m14.966s 00:09:43.057 user 0m32.893s 00:09:43.057 sys 0m5.312s 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.057 ************************************ 00:09:43.057 END TEST nvmf_nmic 00:09:43.057 ************************************ 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
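The fio summary earlier in the run is internally consistent: with `bs=4096` and `runtime=1001 ms`, the 1011 issued reads and 1024 completed writes reproduce the reported 4040 KiB/s and 4092 KiB/s. A quick shell arithmetic check (variable names are ad hoc, values taken from the log above):

```shell
#!/usr/bin/env bash
# KiB/s = ios * bs * 1000 / runtime_ms / 1024, rounded to the nearest KiB
# (the +512 before the final division rounds instead of truncating).
bs=4096 runtime_ms=1001
read_ios=1011 write_ios=1024
read_kib_s=$(( (read_ios  * bs * 1000 / runtime_ms + 512) / 1024 ))
write_kib_s=$(( (write_ios * bs * 1000 / runtime_ms + 512) / 1024 ))
echo "read=${read_kib_s}KiB/s write=${write_kib_s}KiB/s"
```

The ~0.8 s gap between read and write latency averages (clat ~813 us vs ~123 us) comes from the two ~41 s outliers in the read 99th percentiles, which dominate the mean at this small sample size.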
00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.057 ************************************ 00:09:43.057 START TEST nvmf_fio_target 00:09:43.057 ************************************ 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:43.057 * Looking for test storage... 00:09:43.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.057 12:49:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc 
genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.057 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.058 12:49:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.631 12:49:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.631 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:49.632 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:49.632 12:49:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:49.632 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:49.632 Found net devices under 0000:af:00.0: cvl_0_0 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:49.632 Found net devices under 0000:af:00.1: cvl_0_1 
00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:49.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:09:49.632 00:09:49.632 --- 10.0.0.2 ping statistics --- 00:09:49.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.632 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:09:49.632 00:09:49.632 --- 10.0.0.1 ping statistics --- 00:09:49.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.632 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=854389 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 854389 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 854389 ']' 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.632 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.632 [2024-12-15 12:49:56.791391] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:49.632 [2024-12-15 12:49:56.791437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.632 [2024-12-15 12:49:56.872416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.632 [2024-12-15 12:49:56.895441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.632 [2024-12-15 12:49:56.895477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.633 [2024-12-15 12:49:56.895485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.633 [2024-12-15 12:49:56.895491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.633 [2024-12-15 12:49:56.895495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:49.633 [2024-12-15 12:49:56.896944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.633 [2024-12-15 12:49:56.897051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.633 [2024-12-15 12:49:56.897158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.633 [2024-12-15 12:49:56.897158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.633 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.633 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:49.633 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:49.633 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.633 12:49:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.633 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.633 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:49.633 [2024-12-15 12:49:57.186391] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.633 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.633 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:49.633 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.892 12:49:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:49.892 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.151 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:50.151 12:49:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.410 12:49:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:50.410 12:49:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:50.410 12:49:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.669 12:49:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:50.669 12:49:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.928 12:49:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:50.928 12:49:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.188 12:49:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:51.188 12:49:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:51.188 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.447 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:51.447 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.705 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:51.705 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:51.963 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.963 [2024-12-15 12:49:59.825417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.963 12:49:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:52.222 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:52.522 12:50:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:53.458 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:53.458 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:53.458 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.458 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:53.458 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:53.458 12:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:55.995 12:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:55.995 12:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:55.995 12:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.995 12:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:55.995 12:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.995 12:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:55.995 12:50:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:55.995 [global] 00:09:55.995 thread=1 00:09:55.995 invalidate=1 00:09:55.995 rw=write 00:09:55.995 time_based=1 00:09:55.995 runtime=1 00:09:55.995 ioengine=libaio 00:09:55.995 direct=1 00:09:55.995 bs=4096 00:09:55.995 iodepth=1 00:09:55.995 norandommap=0 00:09:55.995 numjobs=1 00:09:55.995 00:09:55.995 
verify_dump=1 00:09:55.995 verify_backlog=512 00:09:55.995 verify_state_save=0 00:09:55.995 do_verify=1 00:09:55.995 verify=crc32c-intel 00:09:55.995 [job0] 00:09:55.995 filename=/dev/nvme0n1 00:09:55.995 [job1] 00:09:55.995 filename=/dev/nvme0n2 00:09:55.995 [job2] 00:09:55.995 filename=/dev/nvme0n3 00:09:55.995 [job3] 00:09:55.995 filename=/dev/nvme0n4 00:09:55.995 Could not set queue depth (nvme0n1) 00:09:55.995 Could not set queue depth (nvme0n2) 00:09:55.995 Could not set queue depth (nvme0n3) 00:09:55.995 Could not set queue depth (nvme0n4) 00:09:55.995 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.995 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.995 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.995 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.995 fio-3.35 00:09:55.995 Starting 4 threads 00:09:57.373 00:09:57.373 job0: (groupid=0, jobs=1): err= 0: pid=855706: Sun Dec 15 12:50:04 2024 00:09:57.373 read: IOPS=111, BW=446KiB/s (456kB/s)(456KiB/1023msec) 00:09:57.373 slat (nsec): min=7276, max=38955, avg=11238.66, stdev=6616.31 00:09:57.373 clat (usec): min=190, max=41964, avg=8135.21, stdev=16203.20 00:09:57.373 lat (usec): min=197, max=41989, avg=8146.44, stdev=16209.06 00:09:57.373 clat percentiles (usec): 00:09:57.373 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 233], 00:09:57.373 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:09:57.373 | 70.00th=[ 262], 80.00th=[ 322], 90.00th=[41157], 95.00th=[41157], 00:09:57.373 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:57.373 | 99.99th=[42206] 00:09:57.373 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:09:57.373 slat (nsec): min=10129, max=39391, 
avg=11992.20, stdev=1844.73 00:09:57.373 clat (usec): min=130, max=282, avg=167.24, stdev=24.23 00:09:57.373 lat (usec): min=141, max=293, avg=179.23, stdev=24.60 00:09:57.373 clat percentiles (usec): 00:09:57.373 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:57.373 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:09:57.373 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 194], 95.00th=[ 239], 00:09:57.373 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 285], 99.95th=[ 285], 00:09:57.373 | 99.99th=[ 285] 00:09:57.373 bw ( KiB/s): min= 4096, max= 4096, per=23.45%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.373 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.373 lat (usec) : 250=90.26%, 500=6.23% 00:09:57.373 lat (msec) : 50=3.51% 00:09:57.373 cpu : usr=0.98%, sys=0.59%, ctx=630, majf=0, minf=1 00:09:57.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.373 issued rwts: total=114,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.373 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.373 job1: (groupid=0, jobs=1): err= 0: pid=855707: Sun Dec 15 12:50:04 2024 00:09:57.373 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:09:57.373 slat (nsec): min=9832, max=24243, avg=19126.14, stdev=4248.91 00:09:57.373 clat (usec): min=40810, max=41077, avg=40974.45, stdev=59.23 00:09:57.373 lat (usec): min=40833, max=41098, avg=40993.57, stdev=58.85 00:09:57.373 clat percentiles (usec): 00:09:57.373 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:57.373 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:57.373 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:57.373 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:09:57.373 | 99.99th=[41157] 00:09:57.373 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:57.373 slat (nsec): min=10721, max=41807, avg=12453.85, stdev=2514.72 00:09:57.373 clat (usec): min=135, max=320, avg=184.57, stdev=16.93 00:09:57.373 lat (usec): min=146, max=360, avg=197.02, stdev=17.32 00:09:57.373 clat percentiles (usec): 00:09:57.373 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:09:57.373 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:09:57.373 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 202], 95.00th=[ 208], 00:09:57.373 | 99.00th=[ 229], 99.50th=[ 235], 99.90th=[ 322], 99.95th=[ 322], 00:09:57.373 | 99.99th=[ 322] 00:09:57.373 bw ( KiB/s): min= 4096, max= 4096, per=23.45%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.373 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.373 lat (usec) : 250=95.51%, 500=0.37% 00:09:57.373 lat (msec) : 50=4.12% 00:09:57.373 cpu : usr=1.00%, sys=0.40%, ctx=534, majf=0, minf=2 00:09:57.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.373 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.373 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.373 job2: (groupid=0, jobs=1): err= 0: pid=855708: Sun Dec 15 12:50:04 2024 00:09:57.373 read: IOPS=33, BW=134KiB/s (137kB/s)(136KiB/1013msec) 00:09:57.373 slat (nsec): min=7479, max=29574, avg=17768.88, stdev=7737.64 00:09:57.373 clat (usec): min=212, max=42023, avg=26739.69, stdev=19856.31 00:09:57.373 lat (usec): min=220, max=42046, avg=26757.46, stdev=19863.46 00:09:57.373 clat percentiles (usec): 00:09:57.373 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 249], 00:09:57.373 | 30.00th=[ 302], 40.00th=[40633], 
50.00th=[41157], 60.00th=[41157], 00:09:57.373 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:57.373 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:57.373 | 99.99th=[42206] 00:09:57.373 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:09:57.373 slat (nsec): min=10095, max=39983, avg=11206.87, stdev=1876.86 00:09:57.373 clat (usec): min=137, max=287, avg=186.93, stdev=16.95 00:09:57.373 lat (usec): min=149, max=326, avg=198.14, stdev=17.31 00:09:57.373 clat percentiles (usec): 00:09:57.373 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 174], 00:09:57.373 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 192], 00:09:57.373 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 212], 00:09:57.373 | 99.00th=[ 225], 99.50th=[ 243], 99.90th=[ 289], 99.95th=[ 289], 00:09:57.373 | 99.99th=[ 289] 00:09:57.373 bw ( KiB/s): min= 4096, max= 4096, per=23.45%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.373 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.373 lat (usec) : 250=94.69%, 500=1.28% 00:09:57.373 lat (msec) : 50=4.03% 00:09:57.373 cpu : usr=0.30%, sys=0.49%, ctx=547, majf=0, minf=1 00:09:57.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.373 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.373 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.373 job3: (groupid=0, jobs=1): err= 0: pid=855709: Sun Dec 15 12:50:04 2024 00:09:57.373 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:57.373 slat (nsec): min=6817, max=34486, avg=7936.77, stdev=1486.23 00:09:57.373 clat (usec): min=162, max=976, avg=201.02, stdev=33.27 00:09:57.373 lat (usec): min=170, max=983, avg=208.96, 
stdev=33.55 00:09:57.373 clat percentiles (usec): 00:09:57.373 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:09:57.373 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:57.373 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 239], 95.00th=[ 255], 00:09:57.373 | 99.00th=[ 273], 99.50th=[ 367], 99.90th=[ 510], 99.95th=[ 660], 00:09:57.373 | 99.99th=[ 979] 00:09:57.373 write: IOPS=2928, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec); 0 zone resets 00:09:57.373 slat (nsec): min=9634, max=38955, avg=11136.53, stdev=1328.22 00:09:57.373 clat (usec): min=115, max=474, avg=143.30, stdev=19.46 00:09:57.373 lat (usec): min=126, max=513, avg=154.44, stdev=19.63 00:09:57.373 clat percentiles (usec): 00:09:57.373 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 133], 00:09:57.373 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:09:57.373 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 165], 95.00th=[ 176], 00:09:57.373 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 277], 99.95th=[ 310], 00:09:57.373 | 99.99th=[ 474] 00:09:57.374 bw ( KiB/s): min=12288, max=12288, per=70.35%, avg=12288.00, stdev= 0.00, samples=1 00:09:57.374 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:57.374 lat (usec) : 250=96.76%, 500=3.15%, 750=0.07%, 1000=0.02% 00:09:57.374 cpu : usr=2.80%, sys=5.50%, ctx=5492, majf=0, minf=1 00:09:57.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.374 issued rwts: total=2560,2931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.374 00:09:57.374 Run status group 0 (all jobs): 00:09:57.374 READ: bw=10.4MiB/s (10.9MB/s), 87.6KiB/s-9.99MiB/s (89.7kB/s-10.5MB/s), io=10.7MiB (11.2MB), run=1001-1023msec 00:09:57.374 WRITE: 
bw=17.1MiB/s (17.9MB/s), 2002KiB/s-11.4MiB/s (2050kB/s-12.0MB/s), io=17.4MiB (18.3MB), run=1001-1023msec 00:09:57.374 00:09:57.374 Disk stats (read/write): 00:09:57.374 nvme0n1: ios=161/512, merge=0/0, ticks=1025/81, in_queue=1106, util=98.20% 00:09:57.374 nvme0n2: ios=36/512, merge=0/0, ticks=869/88, in_queue=957, util=91.25% 00:09:57.374 nvme0n3: ios=54/512, merge=0/0, ticks=1731/86, in_queue=1817, util=98.65% 00:09:57.374 nvme0n4: ios=2165/2560, merge=0/0, ticks=706/363, in_queue=1069, util=98.64% 00:09:57.374 12:50:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:57.374 [global] 00:09:57.374 thread=1 00:09:57.374 invalidate=1 00:09:57.374 rw=randwrite 00:09:57.374 time_based=1 00:09:57.374 runtime=1 00:09:57.374 ioengine=libaio 00:09:57.374 direct=1 00:09:57.374 bs=4096 00:09:57.374 iodepth=1 00:09:57.374 norandommap=0 00:09:57.374 numjobs=1 00:09:57.374 00:09:57.374 verify_dump=1 00:09:57.374 verify_backlog=512 00:09:57.374 verify_state_save=0 00:09:57.374 do_verify=1 00:09:57.374 verify=crc32c-intel 00:09:57.374 [job0] 00:09:57.374 filename=/dev/nvme0n1 00:09:57.374 [job1] 00:09:57.374 filename=/dev/nvme0n2 00:09:57.374 [job2] 00:09:57.374 filename=/dev/nvme0n3 00:09:57.374 [job3] 00:09:57.374 filename=/dev/nvme0n4 00:09:57.374 Could not set queue depth (nvme0n1) 00:09:57.374 Could not set queue depth (nvme0n2) 00:09:57.374 Could not set queue depth (nvme0n3) 00:09:57.374 Could not set queue depth (nvme0n4) 00:09:57.632 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.632 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.632 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.632 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.632 fio-3.35 00:09:57.632 Starting 4 threads 00:09:59.010 00:09:59.010 job0: (groupid=0, jobs=1): err= 0: pid=856073: Sun Dec 15 12:50:06 2024 00:09:59.010 read: IOPS=22, BW=89.2KiB/s (91.4kB/s)(92.0KiB/1031msec) 00:09:59.010 slat (nsec): min=8601, max=23818, avg=22062.13, stdev=4155.55 00:09:59.010 clat (usec): min=237, max=42109, avg=39666.11, stdev=8610.94 00:09:59.010 lat (usec): min=260, max=42132, avg=39688.17, stdev=8610.73 00:09:59.010 clat percentiles (usec): 00:09:59.010 | 1.00th=[ 237], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:59.010 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:09:59.010 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:59.010 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:59.010 | 99.99th=[42206] 00:09:59.010 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:09:59.010 slat (nsec): min=8950, max=36570, avg=10245.44, stdev=1914.96 00:09:59.010 clat (usec): min=112, max=294, avg=216.59, stdev=41.79 00:09:59.010 lat (usec): min=122, max=330, avg=226.84, stdev=41.99 00:09:59.010 clat percentiles (usec): 00:09:59.010 | 1.00th=[ 118], 5.00th=[ 130], 10.00th=[ 143], 20.00th=[ 176], 00:09:59.010 | 30.00th=[ 200], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:09:59.010 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 249], 00:09:59.010 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 293], 00:09:59.010 | 99.99th=[ 293] 00:09:59.010 bw ( KiB/s): min= 4096, max= 4096, per=25.17%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.010 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.010 lat (usec) : 250=91.40%, 500=4.49% 00:09:59.010 lat (msec) : 50=4.11% 00:09:59.010 cpu : usr=0.39%, sys=0.39%, ctx=538, majf=0, minf=1 00:09:59.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:59.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.010 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.010 job1: (groupid=0, jobs=1): err= 0: pid=856074: Sun Dec 15 12:50:06 2024 00:09:59.010 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:09:59.010 slat (nsec): min=12855, max=24669, avg=21951.14, stdev=2759.95 00:09:59.010 clat (usec): min=40916, max=42012, avg=41445.98, stdev=487.21 00:09:59.010 lat (usec): min=40939, max=42035, avg=41467.93, stdev=487.58 00:09:59.010 clat percentiles (usec): 00:09:59.010 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:59.010 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:09:59.010 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:59.010 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:59.010 | 99.99th=[42206] 00:09:59.010 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:59.010 slat (nsec): min=9511, max=41832, avg=11186.00, stdev=2036.08 00:09:59.010 clat (usec): min=127, max=331, avg=158.92, stdev=15.90 00:09:59.010 lat (usec): min=138, max=344, avg=170.11, stdev=16.43 00:09:59.010 clat percentiles (usec): 00:09:59.010 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:09:59.010 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:09:59.010 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 184], 00:09:59.010 | 99.00th=[ 206], 99.50th=[ 233], 99.90th=[ 330], 99.95th=[ 330], 00:09:59.010 | 99.99th=[ 330] 00:09:59.010 bw ( KiB/s): min= 4096, max= 4096, per=25.17%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.010 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.010 lat (usec) : 250=95.69%, 500=0.19% 
00:09:59.010 lat (msec) : 50=4.12% 00:09:59.010 cpu : usr=0.00%, sys=0.80%, ctx=535, majf=0, minf=1 00:09:59.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.010 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.010 job2: (groupid=0, jobs=1): err= 0: pid=856076: Sun Dec 15 12:50:06 2024 00:09:59.010 read: IOPS=25, BW=102KiB/s (105kB/s)(104KiB/1016msec) 00:09:59.010 slat (nsec): min=9208, max=23792, avg=20514.19, stdev=5512.18 00:09:59.010 clat (usec): min=241, max=42044, avg=34888.47, stdev=15068.33 00:09:59.010 lat (usec): min=264, max=42067, avg=34908.98, stdev=15067.14 00:09:59.010 clat percentiles (usec): 00:09:59.010 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[40633], 00:09:59.010 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:59.010 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:59.010 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:59.010 | 99.99th=[42206] 00:09:59.010 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:09:59.010 slat (nsec): min=9543, max=42350, avg=10701.07, stdev=1740.99 00:09:59.010 clat (usec): min=128, max=327, avg=196.18, stdev=32.67 00:09:59.010 lat (usec): min=138, max=369, avg=206.88, stdev=32.97 00:09:59.010 clat percentiles (usec): 00:09:59.010 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 165], 00:09:59.010 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 202], 00:09:59.010 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 249], 00:09:59.010 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 326], 99.95th=[ 326], 00:09:59.010 | 99.99th=[ 326] 00:09:59.010 bw ( KiB/s): min= 4096, max= 
4096, per=25.17%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.010 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.010 lat (usec) : 250=91.26%, 500=4.65% 00:09:59.010 lat (msec) : 50=4.09% 00:09:59.010 cpu : usr=0.49%, sys=0.30%, ctx=539, majf=0, minf=1 00:09:59.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.010 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.010 job3: (groupid=0, jobs=1): err= 0: pid=856077: Sun Dec 15 12:50:06 2024 00:09:59.010 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:59.010 slat (nsec): min=6689, max=27898, avg=7514.32, stdev=932.86 00:09:59.010 clat (usec): min=170, max=645, avg=216.54, stdev=25.31 00:09:59.010 lat (usec): min=177, max=652, avg=224.05, stdev=25.41 00:09:59.010 clat percentiles (usec): 00:09:59.010 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:09:59.010 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:09:59.010 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 260], 00:09:59.010 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 445], 99.95th=[ 445], 00:09:59.010 | 99.99th=[ 644] 00:09:59.010 write: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:09:59.010 slat (nsec): min=9284, max=37565, avg=10404.38, stdev=1225.34 00:09:59.010 clat (usec): min=115, max=298, avg=145.48, stdev=19.06 00:09:59.010 lat (usec): min=125, max=336, avg=155.89, stdev=19.20 00:09:59.010 clat percentiles (usec): 00:09:59.010 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:09:59.010 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:09:59.010 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 
186], 00:09:59.011 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 225], 99.95th=[ 258], 00:09:59.011 | 99.99th=[ 297] 00:09:59.011 bw ( KiB/s): min=12288, max=12288, per=75.50%, avg=12288.00, stdev= 0.00, samples=1 00:09:59.011 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:59.011 lat (usec) : 250=94.73%, 500=5.25%, 750=0.02% 00:09:59.011 cpu : usr=2.20%, sys=5.20%, ctx=5222, majf=0, minf=1 00:09:59.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.011 issued rwts: total=2560,2659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.011 00:09:59.011 Run status group 0 (all jobs): 00:09:59.011 READ: bw=9.97MiB/s (10.5MB/s), 87.8KiB/s-9.99MiB/s (89.9kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1031msec 00:09:59.011 WRITE: bw=15.9MiB/s (16.7MB/s), 1986KiB/s-10.4MiB/s (2034kB/s-10.9MB/s), io=16.4MiB (17.2MB), run=1001-1031msec 00:09:59.011 00:09:59.011 Disk stats (read/write): 00:09:59.011 nvme0n1: ios=43/512, merge=0/0, ticks=1087/107, in_queue=1194, util=99.10% 00:09:59.011 nvme0n2: ios=52/512, merge=0/0, ticks=1656/79, in_queue=1735, util=100.00% 00:09:59.011 nvme0n3: ios=48/512, merge=0/0, ticks=1734/99, in_queue=1833, util=98.55% 00:09:59.011 nvme0n4: ios=2107/2560, merge=0/0, ticks=822/362, in_queue=1184, util=99.06% 00:09:59.011 12:50:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:59.011 [global] 00:09:59.011 thread=1 00:09:59.011 invalidate=1 00:09:59.011 rw=write 00:09:59.011 time_based=1 00:09:59.011 runtime=1 00:09:59.011 ioengine=libaio 00:09:59.011 direct=1 00:09:59.011 bs=4096 00:09:59.011 iodepth=128 00:09:59.011 norandommap=0 
00:09:59.011 numjobs=1 00:09:59.011 00:09:59.011 verify_dump=1 00:09:59.011 verify_backlog=512 00:09:59.011 verify_state_save=0 00:09:59.011 do_verify=1 00:09:59.011 verify=crc32c-intel 00:09:59.011 [job0] 00:09:59.011 filename=/dev/nvme0n1 00:09:59.011 [job1] 00:09:59.011 filename=/dev/nvme0n2 00:09:59.011 [job2] 00:09:59.011 filename=/dev/nvme0n3 00:09:59.011 [job3] 00:09:59.011 filename=/dev/nvme0n4 00:09:59.011 Could not set queue depth (nvme0n1) 00:09:59.011 Could not set queue depth (nvme0n2) 00:09:59.011 Could not set queue depth (nvme0n3) 00:09:59.011 Could not set queue depth (nvme0n4) 00:09:59.011 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.011 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.011 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.011 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.011 fio-3.35 00:09:59.011 Starting 4 threads 00:10:00.390 00:10:00.390 job0: (groupid=0, jobs=1): err= 0: pid=856471: Sun Dec 15 12:50:08 2024 00:10:00.390 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:10:00.390 slat (nsec): min=1500, max=15240k, avg=135348.28, stdev=985094.48 00:10:00.390 clat (usec): min=3848, max=58766, avg=16180.16, stdev=10438.02 00:10:00.390 lat (usec): min=3857, max=58776, avg=16315.51, stdev=10521.19 00:10:00.390 clat percentiles (usec): 00:10:00.390 | 1.00th=[ 4883], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8848], 00:10:00.390 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[11994], 60.00th=[16712], 00:10:00.390 | 70.00th=[17957], 80.00th=[20317], 90.00th=[27132], 95.00th=[39584], 00:10:00.390 | 99.00th=[55313], 99.50th=[56886], 99.90th=[58983], 99.95th=[58983], 00:10:00.390 | 99.99th=[58983] 00:10:00.390 write: IOPS=3661, BW=14.3MiB/s 
(15.0MB/s)(14.5MiB/1013msec); 0 zone resets 00:10:00.390 slat (usec): min=2, max=16935, avg=126.96, stdev=744.33 00:10:00.390 clat (usec): min=1335, max=74282, avg=18991.43, stdev=11049.40 00:10:00.390 lat (usec): min=1385, max=74293, avg=19118.39, stdev=11107.04 00:10:00.390 clat percentiles (usec): 00:10:00.390 | 1.00th=[ 3523], 5.00th=[ 7898], 10.00th=[ 8717], 20.00th=[13698], 00:10:00.390 | 30.00th=[15795], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:10:00.390 | 70.00th=[17957], 80.00th=[19530], 90.00th=[30802], 95.00th=[43254], 00:10:00.390 | 99.00th=[68682], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:10:00.390 | 99.99th=[73925] 00:10:00.390 bw ( KiB/s): min=12344, max=16376, per=23.14%, avg=14360.00, stdev=2851.05, samples=2 00:10:00.390 iops : min= 3086, max= 4094, avg=3590.00, stdev=712.76, samples=2 00:10:00.390 lat (msec) : 2=0.18%, 4=0.84%, 10=25.13%, 20=54.23%, 50=16.59% 00:10:00.390 lat (msec) : 100=3.03% 00:10:00.390 cpu : usr=3.06%, sys=5.24%, ctx=390, majf=0, minf=1 00:10:00.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:00.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.390 issued rwts: total=3584,3709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.390 job1: (groupid=0, jobs=1): err= 0: pid=856488: Sun Dec 15 12:50:08 2024 00:10:00.390 read: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec) 00:10:00.390 slat (nsec): min=1326, max=14598k, avg=98103.17, stdev=715955.75 00:10:00.390 clat (usec): min=4820, max=33654, avg=11964.77, stdev=4232.20 00:10:00.390 lat (usec): min=4833, max=33684, avg=12062.87, stdev=4292.06 00:10:00.390 clat percentiles (usec): 00:10:00.390 | 1.00th=[ 6194], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 8717], 00:10:00.390 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[10028], 
60.00th=[11469], 00:10:00.390 | 70.00th=[13829], 80.00th=[15926], 90.00th=[18220], 95.00th=[21103], 00:10:00.390 | 99.00th=[23462], 99.50th=[23725], 99.90th=[24249], 99.95th=[24249], 00:10:00.390 | 99.99th=[33817] 00:10:00.390 write: IOPS=3950, BW=15.4MiB/s (16.2MB/s)(15.7MiB/1016msec); 0 zone resets 00:10:00.390 slat (usec): min=2, max=27869, avg=155.22, stdev=935.81 00:10:00.390 clat (usec): min=2936, max=90525, avg=21318.84, stdev=16704.42 00:10:00.390 lat (usec): min=2948, max=90538, avg=21474.06, stdev=16794.37 00:10:00.390 clat percentiles (usec): 00:10:00.390 | 1.00th=[ 5407], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 8160], 00:10:00.390 | 30.00th=[11731], 40.00th=[16581], 50.00th=[16909], 60.00th=[17433], 00:10:00.390 | 70.00th=[19006], 80.00th=[29754], 90.00th=[42730], 95.00th=[60556], 00:10:00.390 | 99.00th=[84411], 99.50th=[84411], 99.90th=[90702], 99.95th=[90702], 00:10:00.390 | 99.99th=[90702] 00:10:00.390 bw ( KiB/s): min=13360, max=17736, per=25.06%, avg=15548.00, stdev=3094.30, samples=2 00:10:00.390 iops : min= 3340, max= 4434, avg=3887.00, stdev=773.57, samples=2 00:10:00.390 lat (msec) : 4=0.32%, 10=37.51%, 20=44.33%, 50=14.58%, 100=3.26% 00:10:00.390 cpu : usr=2.56%, sys=5.91%, ctx=368, majf=0, minf=1 00:10:00.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:00.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.390 issued rwts: total=3584,4014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.390 job2: (groupid=0, jobs=1): err= 0: pid=856507: Sun Dec 15 12:50:08 2024 00:10:00.390 read: IOPS=3527, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1016msec) 00:10:00.390 slat (usec): min=2, max=18809, avg=155.51, stdev=1041.04 00:10:00.390 clat (usec): min=4460, max=90324, avg=16476.19, stdev=13942.72 00:10:00.390 lat (usec): min=4467, max=90334, 
avg=16631.70, stdev=14075.74 00:10:00.390 clat percentiles (usec): 00:10:00.390 | 1.00th=[ 5276], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[ 9896], 00:10:00.390 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:10:00.390 | 70.00th=[16188], 80.00th=[20055], 90.00th=[25822], 95.00th=[48497], 00:10:00.390 | 99.00th=[81265], 99.50th=[87557], 99.90th=[90702], 99.95th=[90702], 00:10:00.390 | 99.99th=[90702] 00:10:00.390 write: IOPS=3879, BW=15.2MiB/s (15.9MB/s)(15.4MiB/1016msec); 0 zone resets 00:10:00.390 slat (usec): min=3, max=14678, avg=105.00, stdev=637.69 00:10:00.390 clat (usec): min=2275, max=90286, avg=17669.89, stdev=12961.68 00:10:00.390 lat (usec): min=2287, max=90290, avg=17774.89, stdev=13014.86 00:10:00.390 clat percentiles (usec): 00:10:00.390 | 1.00th=[ 3785], 5.00th=[ 7046], 10.00th=[ 8356], 20.00th=[ 9372], 00:10:00.390 | 30.00th=[ 9765], 40.00th=[14746], 50.00th=[16450], 60.00th=[16909], 00:10:00.390 | 70.00th=[17171], 80.00th=[18482], 90.00th=[30540], 95.00th=[44303], 00:10:00.390 | 99.00th=[78119], 99.50th=[81265], 99.90th=[89654], 99.95th=[90702], 00:10:00.390 | 99.99th=[90702] 00:10:00.390 bw ( KiB/s): min=14136, max=16384, per=24.59%, avg=15260.00, stdev=1589.58, samples=2 00:10:00.390 iops : min= 3534, max= 4096, avg=3815.00, stdev=397.39, samples=2 00:10:00.390 lat (msec) : 4=0.80%, 10=28.55%, 20=52.59%, 50=14.16%, 100=3.89% 00:10:00.390 cpu : usr=4.04%, sys=4.63%, ctx=359, majf=0, minf=1 00:10:00.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:00.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.390 issued rwts: total=3584,3942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.390 job3: (groupid=0, jobs=1): err= 0: pid=856513: Sun Dec 15 12:50:08 2024 00:10:00.390 read: IOPS=3931, BW=15.4MiB/s 
(16.1MB/s)(15.5MiB/1008msec) 00:10:00.390 slat (nsec): min=1374, max=18035k, avg=132013.69, stdev=893969.20 00:10:00.390 clat (usec): min=4476, max=67934, avg=15641.33, stdev=8763.49 00:10:00.390 lat (usec): min=4982, max=67943, avg=15773.34, stdev=8843.04 00:10:00.390 clat percentiles (usec): 00:10:00.390 | 1.00th=[ 6915], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[ 9634], 00:10:00.390 | 30.00th=[10028], 40.00th=[10421], 50.00th=[14091], 60.00th=[16188], 00:10:00.390 | 70.00th=[16909], 80.00th=[17957], 90.00th=[23200], 95.00th=[29754], 00:10:00.390 | 99.00th=[60031], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:10:00.390 | 99.99th=[67634] 00:10:00.390 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:10:00.390 slat (usec): min=2, max=13795, avg=109.89, stdev=622.12 00:10:00.391 clat (usec): min=3197, max=67897, avg=16020.87, stdev=6617.98 00:10:00.391 lat (usec): min=3207, max=67902, avg=16130.76, stdev=6650.25 00:10:00.391 clat percentiles (usec): 00:10:00.391 | 1.00th=[ 4752], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9503], 00:10:00.391 | 30.00th=[12780], 40.00th=[15533], 50.00th=[16581], 60.00th=[16909], 00:10:00.391 | 70.00th=[17171], 80.00th=[17957], 90.00th=[22938], 95.00th=[29230], 00:10:00.391 | 99.00th=[41157], 99.50th=[42730], 99.90th=[51119], 99.95th=[51119], 00:10:00.391 | 99.99th=[67634] 00:10:00.391 bw ( KiB/s): min=16384, max=16384, per=26.40%, avg=16384.00, stdev= 0.00, samples=2 00:10:00.391 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:00.391 lat (msec) : 4=0.42%, 10=25.95%, 20=59.06%, 50=13.70%, 100=0.87% 00:10:00.391 cpu : usr=2.98%, sys=6.16%, ctx=405, majf=0, minf=1 00:10:00.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:00.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.391 issued rwts: total=3963,4096,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:00.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.391 00:10:00.391 Run status group 0 (all jobs): 00:10:00.391 READ: bw=56.6MiB/s (59.3MB/s), 13.8MiB/s-15.4MiB/s (14.4MB/s-16.1MB/s), io=57.5MiB (60.3MB), run=1008-1016msec 00:10:00.391 WRITE: bw=60.6MiB/s (63.5MB/s), 14.3MiB/s-15.9MiB/s (15.0MB/s-16.6MB/s), io=61.6MiB (64.6MB), run=1008-1016msec 00:10:00.391 00:10:00.391 Disk stats (read/write): 00:10:00.391 nvme0n1: ios=3124/3343, merge=0/0, ticks=43340/61373, in_queue=104713, util=97.70% 00:10:00.391 nvme0n2: ios=3105/3279, merge=0/0, ticks=36793/67950, in_queue=104743, util=98.07% 00:10:00.391 nvme0n3: ios=3130/3207, merge=0/0, ticks=49649/54892, in_queue=104541, util=98.12% 00:10:00.391 nvme0n4: ios=3130/3415, merge=0/0, ticks=49174/55214, in_queue=104388, util=98.11% 00:10:00.391 12:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:00.391 [global] 00:10:00.391 thread=1 00:10:00.391 invalidate=1 00:10:00.391 rw=randwrite 00:10:00.391 time_based=1 00:10:00.391 runtime=1 00:10:00.391 ioengine=libaio 00:10:00.391 direct=1 00:10:00.391 bs=4096 00:10:00.391 iodepth=128 00:10:00.391 norandommap=0 00:10:00.391 numjobs=1 00:10:00.391 00:10:00.391 verify_dump=1 00:10:00.391 verify_backlog=512 00:10:00.391 verify_state_save=0 00:10:00.391 do_verify=1 00:10:00.391 verify=crc32c-intel 00:10:00.391 [job0] 00:10:00.391 filename=/dev/nvme0n1 00:10:00.391 [job1] 00:10:00.391 filename=/dev/nvme0n2 00:10:00.391 [job2] 00:10:00.391 filename=/dev/nvme0n3 00:10:00.391 [job3] 00:10:00.391 filename=/dev/nvme0n4 00:10:00.391 Could not set queue depth (nvme0n1) 00:10:00.391 Could not set queue depth (nvme0n2) 00:10:00.391 Could not set queue depth (nvme0n3) 00:10:00.391 Could not set queue depth (nvme0n4) 00:10:00.650 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:00.651 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.651 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.651 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.651 fio-3.35 00:10:00.651 Starting 4 threads 00:10:02.029 00:10:02.029 job0: (groupid=0, jobs=1): err= 0: pid=856958: Sun Dec 15 12:50:09 2024 00:10:02.029 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:10:02.029 slat (nsec): min=1085, max=21552k, avg=96797.30, stdev=768101.16 00:10:02.029 clat (usec): min=2173, max=47223, avg=12767.40, stdev=6713.66 00:10:02.029 lat (usec): min=2179, max=47246, avg=12864.20, stdev=6776.52 00:10:02.029 clat percentiles (usec): 00:10:02.029 | 1.00th=[ 4293], 5.00th=[ 5604], 10.00th=[ 7308], 20.00th=[ 8586], 00:10:02.029 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10945], 60.00th=[11994], 00:10:02.029 | 70.00th=[13698], 80.00th=[15401], 90.00th=[19530], 95.00th=[25822], 00:10:02.029 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:10:02.029 | 99.99th=[47449] 00:10:02.029 write: IOPS=5166, BW=20.2MiB/s (21.2MB/s)(20.4MiB/1011msec); 0 zone resets 00:10:02.029 slat (nsec): min=1815, max=10663k, avg=84259.13, stdev=530284.00 00:10:02.029 clat (usec): min=445, max=45616, avg=11972.01, stdev=6464.66 00:10:02.029 lat (usec): min=454, max=45622, avg=12056.27, stdev=6497.86 00:10:02.029 clat percentiles (usec): 00:10:02.029 | 1.00th=[ 3130], 5.00th=[ 4686], 10.00th=[ 6980], 20.00th=[ 8029], 00:10:02.029 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:10:02.029 | 70.00th=[12387], 80.00th=[16057], 90.00th=[18744], 95.00th=[26870], 00:10:02.029 | 99.00th=[38536], 99.50th=[42730], 99.90th=[45351], 99.95th=[45876], 00:10:02.029 | 99.99th=[45876] 00:10:02.029 bw ( KiB/s): min=16384, 
max=24576, per=30.02%, avg=20480.00, stdev=5792.62, samples=2 00:10:02.029 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:10:02.029 lat (usec) : 500=0.03%, 750=0.07%, 1000=0.08% 00:10:02.029 lat (msec) : 2=0.10%, 4=1.61%, 10=41.94%, 20=47.90%, 50=8.28% 00:10:02.029 cpu : usr=3.47%, sys=5.35%, ctx=496, majf=0, minf=1 00:10:02.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:02.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.029 issued rwts: total=5120,5223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.029 job1: (groupid=0, jobs=1): err= 0: pid=856973: Sun Dec 15 12:50:09 2024 00:10:02.029 read: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1051msec) 00:10:02.029 slat (nsec): min=1008, max=26431k, avg=108774.48, stdev=894747.85 00:10:02.029 clat (usec): min=4651, max=53028, avg=15090.87, stdev=7098.09 00:10:02.029 lat (usec): min=4657, max=56627, avg=15199.65, stdev=7156.22 00:10:02.029 clat percentiles (usec): 00:10:02.029 | 1.00th=[ 6718], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[ 9765], 00:10:02.029 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11863], 60.00th=[14091], 00:10:02.029 | 70.00th=[16450], 80.00th=[20055], 90.00th=[23200], 95.00th=[32113], 00:10:02.029 | 99.00th=[39584], 99.50th=[39584], 99.90th=[53216], 99.95th=[53216], 00:10:02.029 | 99.99th=[53216] 00:10:02.029 write: IOPS=3897, BW=15.2MiB/s (16.0MB/s)(16.0MiB/1051msec); 0 zone resets 00:10:02.029 slat (nsec): min=1882, max=18167k, avg=133767.23, stdev=832387.89 00:10:02.029 clat (usec): min=1183, max=102251, avg=19326.82, stdev=18708.98 00:10:02.029 lat (usec): min=1190, max=102257, avg=19460.59, stdev=18814.51 00:10:02.029 clat percentiles (msec): 00:10:02.029 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:10:02.029 | 30.00th=[ 10], 40.00th=[ 
11], 50.00th=[ 13], 60.00th=[ 16], 00:10:02.029 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 41], 95.00th=[ 68], 00:10:02.029 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 103], 99.95th=[ 103], 00:10:02.029 | 99.99th=[ 103] 00:10:02.029 bw ( KiB/s): min=11328, max=20480, per=23.31%, avg=15904.00, stdev=6471.44, samples=2 00:10:02.029 iops : min= 2832, max= 5120, avg=3976.00, stdev=1617.86, samples=2 00:10:02.029 lat (msec) : 2=0.26%, 4=0.65%, 10=31.07%, 20=44.18%, 50=20.23% 00:10:02.029 lat (msec) : 100=3.33%, 250=0.29% 00:10:02.029 cpu : usr=3.24%, sys=3.90%, ctx=371, majf=0, minf=2 00:10:02.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:02.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.029 issued rwts: total=3591,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.029 job2: (groupid=0, jobs=1): err= 0: pid=856994: Sun Dec 15 12:50:09 2024 00:10:02.029 read: IOPS=3857, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1006msec) 00:10:02.029 slat (nsec): min=1063, max=54952k, avg=133527.63, stdev=1334913.67 00:10:02.029 clat (usec): min=2515, max=84087, avg=17059.22, stdev=14278.56 00:10:02.029 lat (usec): min=2525, max=84111, avg=17192.75, stdev=14357.92 00:10:02.029 clat percentiles (usec): 00:10:02.029 | 1.00th=[ 5211], 5.00th=[ 7242], 10.00th=[ 8586], 20.00th=[ 9896], 00:10:02.029 | 30.00th=[11076], 40.00th=[12125], 50.00th=[12780], 60.00th=[13698], 00:10:02.029 | 70.00th=[15270], 80.00th=[17171], 90.00th=[26084], 95.00th=[56361], 00:10:02.029 | 99.00th=[72877], 99.50th=[73925], 99.90th=[73925], 99.95th=[81265], 00:10:02.029 | 99.99th=[84411] 00:10:02.029 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:10:02.029 slat (nsec): min=1987, max=11835k, avg=109731.10, stdev=658635.18 00:10:02.029 clat (usec): min=746, max=51203, 
avg=14950.85, stdev=8441.82 00:10:02.029 lat (usec): min=754, max=51213, avg=15060.58, stdev=8488.17 00:10:02.029 clat percentiles (usec): 00:10:02.029 | 1.00th=[ 4359], 5.00th=[ 5604], 10.00th=[ 6980], 20.00th=[ 9896], 00:10:02.029 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11863], 60.00th=[14091], 00:10:02.029 | 70.00th=[17171], 80.00th=[19530], 90.00th=[21890], 95.00th=[30802], 00:10:02.029 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:10:02.029 | 99.99th=[51119] 00:10:02.029 bw ( KiB/s): min=16384, max=16384, per=24.02%, avg=16384.00, stdev= 0.00, samples=2 00:10:02.029 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:02.029 lat (usec) : 750=0.05%, 1000=0.01% 00:10:02.029 lat (msec) : 4=0.53%, 10=20.02%, 20=62.99%, 50=12.12%, 100=4.27% 00:10:02.029 cpu : usr=2.69%, sys=3.78%, ctx=323, majf=0, minf=1 00:10:02.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:02.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.029 issued rwts: total=3881,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.029 job3: (groupid=0, jobs=1): err= 0: pid=857000: Sun Dec 15 12:50:09 2024 00:10:02.029 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:10:02.029 slat (nsec): min=1320, max=15002k, avg=90201.98, stdev=768821.64 00:10:02.029 clat (usec): min=3414, max=57122, avg=13532.25, stdev=5865.90 00:10:02.029 lat (usec): min=3420, max=57167, avg=13622.45, stdev=5920.44 00:10:02.029 clat percentiles (usec): 00:10:02.029 | 1.00th=[ 6915], 5.00th=[ 7898], 10.00th=[ 8717], 20.00th=[ 9765], 00:10:02.029 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[12256], 00:10:02.029 | 70.00th=[13960], 80.00th=[16712], 90.00th=[21890], 95.00th=[25035], 00:10:02.029 | 99.00th=[32113], 99.50th=[44303], 
99.90th=[44827], 99.95th=[44827], 00:10:02.029 | 99.99th=[56886] 00:10:02.029 write: IOPS=4469, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1009msec); 0 zone resets 00:10:02.029 slat (usec): min=2, max=15517, avg=98.41, stdev=712.95 00:10:02.029 clat (usec): min=560, max=97537, avg=16139.27, stdev=11586.43 00:10:02.029 lat (usec): min=586, max=97541, avg=16237.68, stdev=11629.62 00:10:02.029 clat percentiles (usec): 00:10:02.029 | 1.00th=[ 3720], 5.00th=[ 4948], 10.00th=[ 7308], 20.00th=[ 9241], 00:10:02.029 | 30.00th=[10159], 40.00th=[10683], 50.00th=[12649], 60.00th=[15270], 00:10:02.029 | 70.00th=[17171], 80.00th=[20317], 90.00th=[30016], 95.00th=[39060], 00:10:02.029 | 99.00th=[67634], 99.50th=[80217], 99.90th=[96994], 99.95th=[96994], 00:10:02.029 | 99.99th=[98042] 00:10:02.029 bw ( KiB/s): min=14680, max=20384, per=25.70%, avg=17532.00, stdev=4033.34, samples=2 00:10:02.029 iops : min= 3670, max= 5096, avg=4383.00, stdev=1008.33, samples=2 00:10:02.029 lat (usec) : 750=0.02% 00:10:02.029 lat (msec) : 4=1.68%, 10=23.31%, 20=56.45%, 50=17.56%, 100=0.98% 00:10:02.029 cpu : usr=3.27%, sys=5.16%, ctx=350, majf=0, minf=1 00:10:02.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:02.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.029 issued rwts: total=4096,4510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.029 00:10:02.029 Run status group 0 (all jobs): 00:10:02.029 READ: bw=62.0MiB/s (65.0MB/s), 13.3MiB/s-19.8MiB/s (14.0MB/s-20.7MB/s), io=65.2MiB (68.4MB), run=1006-1051msec 00:10:02.029 WRITE: bw=66.6MiB/s (69.9MB/s), 15.2MiB/s-20.2MiB/s (16.0MB/s-21.2MB/s), io=70.0MiB (73.4MB), run=1006-1051msec 00:10:02.029 00:10:02.029 Disk stats (read/write): 00:10:02.029 nvme0n1: ios=4509/4608, merge=0/0, ticks=32650/31982, in_queue=64632, util=98.70% 00:10:02.029 
nvme0n2: ios=3348/3584, merge=0/0, ticks=40470/49377, in_queue=89847, util=87.31% 00:10:02.029 nvme0n3: ios=3072/3575, merge=0/0, ticks=35840/32315, in_queue=68155, util=88.65% 00:10:02.029 nvme0n4: ios=3584/3655, merge=0/0, ticks=46562/57052, in_queue=103614, util=89.71% 00:10:02.029 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:02.029 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=857121 00:10:02.029 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:02.030 12:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:02.030 [global] 00:10:02.030 thread=1 00:10:02.030 invalidate=1 00:10:02.030 rw=read 00:10:02.030 time_based=1 00:10:02.030 runtime=10 00:10:02.030 ioengine=libaio 00:10:02.030 direct=1 00:10:02.030 bs=4096 00:10:02.030 iodepth=1 00:10:02.030 norandommap=1 00:10:02.030 numjobs=1 00:10:02.030 00:10:02.030 [job0] 00:10:02.030 filename=/dev/nvme0n1 00:10:02.030 [job1] 00:10:02.030 filename=/dev/nvme0n2 00:10:02.030 [job2] 00:10:02.030 filename=/dev/nvme0n3 00:10:02.030 [job3] 00:10:02.030 filename=/dev/nvme0n4 00:10:02.030 Could not set queue depth (nvme0n1) 00:10:02.030 Could not set queue depth (nvme0n2) 00:10:02.030 Could not set queue depth (nvme0n3) 00:10:02.030 Could not set queue depth (nvme0n4) 00:10:02.288 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.288 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.288 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.288 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.288 fio-3.35 00:10:02.288 Starting 4 threads 
00:10:04.822 12:50:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:05.082 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42053632, buflen=4096 00:10:05.082 fio: pid=857395, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.082 12:50:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:05.341 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.341 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:05.341 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1515520, buflen=4096 00:10:05.341 fio: pid=857394, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.600 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53420032, buflen=4096 00:10:05.600 fio: pid=857392, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.600 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.600 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:05.859 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45105152, buflen=4096 00:10:05.859 fio: pid=857393, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:05.859 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.859 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:05.859 00:10:05.859 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857392: Sun Dec 15 12:50:13 2024 00:10:05.859 read: IOPS=4204, BW=16.4MiB/s (17.2MB/s)(50.9MiB/3102msec) 00:10:05.859 slat (usec): min=3, max=27501, avg= 9.80, stdev=254.03 00:10:05.859 clat (usec): min=156, max=40623, avg=225.17, stdev=355.93 00:10:05.859 lat (usec): min=173, max=40630, avg=234.97, stdev=438.70 00:10:05.859 clat percentiles (usec): 00:10:05.859 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:10:05.859 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 219], 00:10:05.859 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 258], 95.00th=[ 269], 00:10:05.859 | 99.00th=[ 445], 99.50th=[ 486], 99.90th=[ 519], 99.95th=[ 537], 00:10:05.860 | 99.99th=[ 922] 00:10:05.860 bw ( KiB/s): min=15938, max=18440, per=40.46%, avg=16948.33, stdev=931.54, samples=6 00:10:05.860 iops : min= 3984, max= 4610, avg=4237.00, stdev=232.99, samples=6 00:10:05.860 lat (usec) : 250=86.54%, 500=13.15%, 750=0.28%, 1000=0.01% 00:10:05.860 lat (msec) : 50=0.01% 00:10:05.860 cpu : usr=1.16%, sys=3.61%, ctx=13046, majf=0, minf=1 00:10:05.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.860 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.860 issued rwts: total=13043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.860 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857393: Sun Dec 15 12:50:13 2024 00:10:05.860 read: IOPS=3324, 
BW=13.0MiB/s (13.6MB/s)(43.0MiB/3313msec) 00:10:05.860 slat (usec): min=3, max=15594, avg=11.88, stdev=214.69 00:10:05.860 clat (usec): min=168, max=41255, avg=285.30, stdev=675.37 00:10:05.860 lat (usec): min=175, max=41259, avg=297.18, stdev=709.28 00:10:05.860 clat percentiles (usec): 00:10:05.860 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 231], 00:10:05.860 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:10:05.860 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 416], 95.00th=[ 486], 00:10:05.860 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 537], 99.95th=[ 627], 00:10:05.860 | 99.99th=[40633] 00:10:05.860 bw ( KiB/s): min=11072, max=16168, per=31.74%, avg=13294.00, stdev=2217.52, samples=6 00:10:05.860 iops : min= 2768, max= 4042, avg=3323.50, stdev=554.38, samples=6 00:10:05.860 lat (usec) : 250=47.99%, 500=49.81%, 750=2.14%, 1000=0.01% 00:10:05.860 lat (msec) : 4=0.01%, 50=0.03% 00:10:05.860 cpu : usr=0.63%, sys=3.56%, ctx=11020, majf=0, minf=2 00:10:05.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.860 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.860 issued rwts: total=11013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.860 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857394: Sun Dec 15 12:50:13 2024 00:10:05.860 read: IOPS=127, BW=508KiB/s (520kB/s)(1480KiB/2916msec) 00:10:05.860 slat (nsec): min=6385, max=31844, avg=10214.02, stdev=6320.33 00:10:05.860 clat (usec): min=200, max=42077, avg=7810.23, stdev=15894.30 00:10:05.860 lat (usec): min=207, max=42101, avg=7820.41, stdev=15900.21 00:10:05.860 clat percentiles (usec): 00:10:05.860 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 245], 00:10:05.860 | 30.00th=[ 251], 40.00th=[ 
255], 50.00th=[ 265], 60.00th=[ 281], 00:10:05.860 | 70.00th=[ 306], 80.00th=[ 490], 90.00th=[41157], 95.00th=[41681], 00:10:05.860 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:05.860 | 99.99th=[42206] 00:10:05.860 bw ( KiB/s): min= 96, max= 1936, per=1.11%, avg=465.60, stdev=821.99, samples=5 00:10:05.860 iops : min= 24, max= 484, avg=116.40, stdev=205.50, samples=5 00:10:05.860 lat (usec) : 250=28.03%, 500=52.02%, 750=1.35% 00:10:05.860 lat (msec) : 50=18.33% 00:10:05.860 cpu : usr=0.07%, sys=0.14%, ctx=371, majf=0, minf=2 00:10:05.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.860 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.860 issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.860 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857395: Sun Dec 15 12:50:13 2024 00:10:05.860 read: IOPS=3798, BW=14.8MiB/s (15.6MB/s)(40.1MiB/2703msec) 00:10:05.860 slat (nsec): min=6408, max=34247, avg=7595.64, stdev=1243.10 00:10:05.860 clat (usec): min=161, max=41380, avg=252.37, stdev=411.65 00:10:05.860 lat (usec): min=168, max=41388, avg=259.96, stdev=411.66 00:10:05.860 clat percentiles (usec): 00:10:05.860 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:05.860 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 241], 00:10:05.860 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 461], 00:10:05.860 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 529], 99.95th=[ 553], 00:10:05.860 | 99.99th=[ 660] 00:10:05.860 bw ( KiB/s): min=12760, max=16864, per=36.42%, avg=15254.40, stdev=1899.05, samples=5 00:10:05.860 iops : min= 3190, max= 4216, avg=3813.60, stdev=474.76, samples=5 00:10:05.860 lat (usec) : 250=62.29%, 
500=37.11%, 750=0.58% 00:10:05.860 lat (msec) : 50=0.01% 00:10:05.860 cpu : usr=0.74%, sys=3.85%, ctx=10269, majf=0, minf=2 00:10:05.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.860 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.860 issued rwts: total=10268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.860 00:10:05.860 Run status group 0 (all jobs): 00:10:05.860 READ: bw=40.9MiB/s (42.9MB/s), 508KiB/s-16.4MiB/s (520kB/s-17.2MB/s), io=136MiB (142MB), run=2703-3313msec 00:10:05.860 00:10:05.860 Disk stats (read/write): 00:10:05.860 nvme0n1: ios=13042/0, merge=0/0, ticks=2869/0, in_queue=2869, util=94.05% 00:10:05.860 nvme0n2: ios=10295/0, merge=0/0, ticks=3132/0, in_queue=3132, util=98.58% 00:10:05.860 nvme0n3: ios=368/0, merge=0/0, ticks=2808/0, in_queue=2808, util=96.43% 00:10:05.860 nvme0n4: ios=9844/0, merge=0/0, ticks=2448/0, in_queue=2448, util=96.42% 00:10:05.860 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:05.860 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:06.120 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.120 12:50:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:06.379 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.379 12:50:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:06.638 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.638 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 857121 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:06.898 nvmf hotplug test: fio failed as expected 00:10:06.898 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.157 rmmod nvme_tcp 00:10:07.157 rmmod nvme_fabrics 00:10:07.157 rmmod nvme_keyring 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 854389 ']' 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 854389 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 854389 ']' 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 854389 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.157 12:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 854389 00:10:07.157 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.157 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.157 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 854389' 00:10:07.157 killing process with pid 854389 00:10:07.157 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 854389 00:10:07.157 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 854389 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:07.418 12:50:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.418 12:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.956 00:10:09.956 real 0m26.736s 00:10:09.956 user 1m46.644s 00:10:09.956 sys 0m8.572s 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.956 ************************************ 00:10:09.956 END TEST nvmf_fio_target 00:10:09.956 ************************************ 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:10:09.956 ************************************ 00:10:09.956 START TEST nvmf_bdevio 00:10:09.956 ************************************ 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:09.956 * Looking for test storage... 00:10:09.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.956 12:50:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:09.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.956 --rc genhtml_branch_coverage=1 00:10:09.956 --rc genhtml_function_coverage=1 00:10:09.956 --rc genhtml_legend=1 00:10:09.956 --rc geninfo_all_blocks=1 00:10:09.956 --rc geninfo_unexecuted_blocks=1 00:10:09.956 00:10:09.956 ' 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:09.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.956 --rc genhtml_branch_coverage=1 00:10:09.956 --rc genhtml_function_coverage=1 00:10:09.956 --rc genhtml_legend=1 00:10:09.956 --rc geninfo_all_blocks=1 00:10:09.956 --rc geninfo_unexecuted_blocks=1 00:10:09.956 00:10:09.956 ' 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:09.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.956 --rc genhtml_branch_coverage=1 00:10:09.956 --rc genhtml_function_coverage=1 00:10:09.956 --rc genhtml_legend=1 00:10:09.956 --rc geninfo_all_blocks=1 00:10:09.956 --rc geninfo_unexecuted_blocks=1 00:10:09.956 00:10:09.956 ' 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:09.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.956 --rc genhtml_branch_coverage=1 00:10:09.956 --rc genhtml_function_coverage=1 00:10:09.956 --rc genhtml_legend=1 00:10:09.956 --rc geninfo_all_blocks=1 00:10:09.956 --rc geninfo_unexecuted_blocks=1 00:10:09.956 00:10:09.956 ' 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.956 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.957 12:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.533 12:50:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.533 12:50:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:16.533 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.533 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:16.534 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.534 
12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:16.534 Found net devices under 0000:af:00.0: cvl_0_0 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:16.534 Found net devices under 0000:af:00.1: cvl_0_1 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:16.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:10:16.534 00:10:16.534 --- 10.0.0.2 ping statistics --- 00:10:16.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.534 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:10:16.534 00:10:16.534 --- 10.0.0.1 ping statistics --- 00:10:16.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.534 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.534 12:50:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=861677 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 861677 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 861677 ']' 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 [2024-12-15 12:50:23.637719] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:16.534 [2024-12-15 12:50:23.637765] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.534 [2024-12-15 12:50:23.714963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.534 [2024-12-15 12:50:23.737684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.534 [2024-12-15 12:50:23.737721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.534 [2024-12-15 12:50:23.737728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.534 [2024-12-15 12:50:23.737733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.534 [2024-12-15 12:50:23.737739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:16.534 [2024-12-15 12:50:23.739287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.534 [2024-12-15 12:50:23.739395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.534 [2024-12-15 12:50:23.739523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.534 [2024-12-15 12:50:23.739524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.534 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.535 [2024-12-15 12:50:23.871366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.535 12:50:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.535 Malloc0 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.535 [2024-12-15 12:50:23.936620] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.535 { 00:10:16.535 "params": { 00:10:16.535 "name": "Nvme$subsystem", 00:10:16.535 "trtype": "$TEST_TRANSPORT", 00:10:16.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.535 "adrfam": "ipv4", 00:10:16.535 "trsvcid": "$NVMF_PORT", 00:10:16.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.535 "hdgst": ${hdgst:-false}, 00:10:16.535 "ddgst": ${ddgst:-false} 00:10:16.535 }, 00:10:16.535 "method": "bdev_nvme_attach_controller" 00:10:16.535 } 00:10:16.535 EOF 00:10:16.535 )") 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:16.535 12:50:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.535 "params": { 00:10:16.535 "name": "Nvme1", 00:10:16.535 "trtype": "tcp", 00:10:16.535 "traddr": "10.0.0.2", 00:10:16.535 "adrfam": "ipv4", 00:10:16.535 "trsvcid": "4420", 00:10:16.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.535 "hdgst": false, 00:10:16.535 "ddgst": false 00:10:16.535 }, 00:10:16.535 "method": "bdev_nvme_attach_controller" 00:10:16.535 }' 00:10:16.535 [2024-12-15 12:50:23.986579] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:16.535 [2024-12-15 12:50:23.986624] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861799 ] 00:10:16.535 [2024-12-15 12:50:24.059990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.535 [2024-12-15 12:50:24.084919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.535 [2024-12-15 12:50:24.085029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.535 [2024-12-15 12:50:24.085029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.535 I/O targets: 00:10:16.535 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:16.535 00:10:16.535 00:10:16.535 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.535 http://cunit.sourceforge.net/ 00:10:16.535 00:10:16.535 00:10:16.535 Suite: bdevio tests on: Nvme1n1 00:10:16.535 Test: blockdev write read block ...passed 00:10:16.794 Test: blockdev write zeroes read block ...passed 00:10:16.794 Test: blockdev write zeroes read no split ...passed 00:10:16.794 Test: blockdev write zeroes read split 
...passed 00:10:16.794 Test: blockdev write zeroes read split partial ...passed 00:10:16.794 Test: blockdev reset ...[2024-12-15 12:50:24.472456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:16.794 [2024-12-15 12:50:24.472516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1990340 (9): Bad file descriptor 00:10:16.794 [2024-12-15 12:50:24.524515] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:16.794 passed 00:10:16.794 Test: blockdev write read 8 blocks ...passed 00:10:16.794 Test: blockdev write read size > 128k ...passed 00:10:16.794 Test: blockdev write read invalid size ...passed 00:10:16.794 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:16.794 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:16.794 Test: blockdev write read max offset ...passed 00:10:16.794 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.794 Test: blockdev writev readv 8 blocks ...passed 00:10:16.794 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.054 Test: blockdev writev readv block ...passed 00:10:17.054 Test: blockdev writev readv size > 128k ...passed 00:10:17.054 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.054 Test: blockdev comparev and writev ...[2024-12-15 12:50:24.736548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.054 [2024-12-15 12:50:24.736580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.736595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.054 [2024-12-15 
12:50:24.736603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.736847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.054 [2024-12-15 12:50:24.736858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.736869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.054 [2024-12-15 12:50:24.736876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.737090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.054 [2024-12-15 12:50:24.737100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.737111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.054 [2024-12-15 12:50:24.737118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.737332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.054 [2024-12-15 12:50:24.737342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.737353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.054 [2024-12-15 12:50:24.737363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:17.054 passed 00:10:17.054 Test: blockdev nvme passthru rw ...passed 00:10:17.054 Test: blockdev nvme passthru vendor specific ...[2024-12-15 12:50:24.820257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.054 [2024-12-15 12:50:24.820275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.820404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.054 [2024-12-15 12:50:24.820413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.820519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.054 [2024-12-15 12:50:24.820528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:17.054 [2024-12-15 12:50:24.820632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.054 [2024-12-15 12:50:24.820641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:17.054 passed 00:10:17.054 Test: blockdev nvme admin passthru ...passed 00:10:17.054 Test: blockdev copy ...passed 00:10:17.054 00:10:17.054 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.054 suites 1 1 n/a 0 0 00:10:17.054 tests 23 23 23 0 0 00:10:17.054 asserts 152 152 152 0 n/a 00:10:17.054 00:10:17.054 Elapsed time = 1.045 seconds 
00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.314 rmmod nvme_tcp 00:10:17.314 rmmod nvme_fabrics 00:10:17.314 rmmod nvme_keyring 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 861677 ']' 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 861677 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 861677 ']' 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 861677 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 861677 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 861677' 00:10:17.314 killing process with pid 861677 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 861677 00:10:17.314 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 861677 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.573 12:50:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.481 12:50:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.740 00:10:19.740 real 0m10.060s 00:10:19.740 user 0m10.139s 00:10:19.740 sys 0m5.029s 00:10:19.740 12:50:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.740 12:50:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:19.740 ************************************ 00:10:19.740 END TEST nvmf_bdevio 00:10:19.740 ************************************ 00:10:19.740 12:50:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:19.740 00:10:19.740 real 4m33.761s 00:10:19.740 user 10m22.565s 00:10:19.740 sys 1m38.564s 00:10:19.740 12:50:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.740 12:50:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.740 ************************************ 00:10:19.740 END TEST nvmf_target_core 00:10:19.740 ************************************ 00:10:19.740 12:50:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.740 12:50:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.740 12:50:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.740 12:50:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:19.740 ************************************ 00:10:19.740 START TEST nvmf_target_extra 00:10:19.740 ************************************ 00:10:19.740 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.740 * Looking for test storage... 00:10:19.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:19.740 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:19.740 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:19.740 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.000 --rc genhtml_branch_coverage=1 00:10:20.000 --rc genhtml_function_coverage=1 00:10:20.000 --rc genhtml_legend=1 00:10:20.000 --rc geninfo_all_blocks=1 
00:10:20.000 --rc geninfo_unexecuted_blocks=1 00:10:20.000 00:10:20.000 ' 00:10:20.000 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.000 --rc genhtml_branch_coverage=1 00:10:20.000 --rc genhtml_function_coverage=1 00:10:20.000 --rc genhtml_legend=1 00:10:20.000 --rc geninfo_all_blocks=1 00:10:20.000 --rc geninfo_unexecuted_blocks=1 00:10:20.000 00:10:20.000 ' 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.001 --rc genhtml_branch_coverage=1 00:10:20.001 --rc genhtml_function_coverage=1 00:10:20.001 --rc genhtml_legend=1 00:10:20.001 --rc geninfo_all_blocks=1 00:10:20.001 --rc geninfo_unexecuted_blocks=1 00:10:20.001 00:10:20.001 ' 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.001 --rc genhtml_branch_coverage=1 00:10:20.001 --rc genhtml_function_coverage=1 00:10:20.001 --rc genhtml_legend=1 00:10:20.001 --rc geninfo_all_blocks=1 00:10:20.001 --rc geninfo_unexecuted_blocks=1 00:10:20.001 00:10:20.001 ' 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:20.001 ************************************ 00:10:20.001 START TEST nvmf_example 00:10:20.001 ************************************ 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:20.001 * Looking for test storage... 00:10:20.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.001 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.276 
12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.276 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.277 --rc genhtml_branch_coverage=1 00:10:20.277 --rc genhtml_function_coverage=1 00:10:20.277 --rc genhtml_legend=1 00:10:20.277 --rc geninfo_all_blocks=1 00:10:20.277 --rc geninfo_unexecuted_blocks=1 00:10:20.277 00:10:20.277 ' 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.277 --rc genhtml_branch_coverage=1 00:10:20.277 --rc genhtml_function_coverage=1 00:10:20.277 --rc genhtml_legend=1 00:10:20.277 --rc geninfo_all_blocks=1 00:10:20.277 --rc geninfo_unexecuted_blocks=1 00:10:20.277 00:10:20.277 ' 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.277 --rc genhtml_branch_coverage=1 00:10:20.277 --rc genhtml_function_coverage=1 00:10:20.277 --rc genhtml_legend=1 00:10:20.277 --rc geninfo_all_blocks=1 00:10:20.277 --rc geninfo_unexecuted_blocks=1 00:10:20.277 00:10:20.277 ' 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.277 --rc 
genhtml_branch_coverage=1 00:10:20.277 --rc genhtml_function_coverage=1 00:10:20.277 --rc genhtml_legend=1 00:10:20.277 --rc geninfo_all_blocks=1 00:10:20.277 --rc geninfo_unexecuted_blocks=1 00:10:20.277 00:10:20.277 ' 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.277 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:20.278 12:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.278 
12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.278 12:50:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:26.953 12:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:26.953 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:26.953 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:26.953 Found net devices under 0000:af:00.0: cvl_0_0 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.953 12:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:26.953 Found net devices under 0000:af:00.1: cvl_0_1 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.953 
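[Editor's note] The device-discovery portion of the trace above (common.sh@410-429) maps each e810 PCI function to its kernel netdev by globbing sysfs: `/sys/bus/pci/devices/<bdf>/net/<ifname>`, then stripping the path to keep only the interface name. A minimal sketch of that idiom, run against a throwaway directory standing in for `/sys/bus/pci` so it works without the e810 hardware (the BDF `0000:af:00.0` and ifname `cvl_0_0` are copied from this log):

```shell
#!/usr/bin/env bash
# Sketch of the PCI-to-netdev mapping in the trace; a mktemp dir stands in
# for sysfs so this runs on any machine (assumption: no real NIC needed).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/devices/0000:af:00.0/net/cvl_0_0"

pci=0000:af:00.0
pci_net_devs=("$sysfs/devices/$pci/net/"*)        # glob the netdev entries
pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```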
12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.953 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:10:26.954 00:10:26.954 --- 10.0.0.2 ping statistics --- 00:10:26.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.954 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:10:26.954 00:10:26.954 --- 10.0.0.1 ping statistics --- 00:10:26.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.954 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:26.954 12:50:33 
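[Editor's note] The `nvmf_tcp_init` steps traced above (common.sh@250-291) boil down to: flush both e810 ports, move the target port into a fresh network namespace, address the two ends of the link, bring them up, and open TCP/4420. A dry-run sketch that only prints each command instead of executing it, since the real sequence needs root and the physical NIC (interface names and the 10.0.0.0/24 addresses are copied from this log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init as seen in the trace; $run=echo prints
# each command rather than running it (root + real e810 ports required).
nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    local run="echo"   # swap for "sudo" to execute for real

    $run ip -4 addr flush "$target_if"
    $run ip -4 addr flush "$initiator_if"
    $run ip netns add "$ns"
    $run ip link set "$target_if" netns "$ns"           # target port lives in the namespace
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"    # initiator side stays in the host
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}
nvmf_tcp_init_sketch
```

The bidirectional pings that follow in the trace (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) are what gate the `return 0` from `nvmf_tcp_init`.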
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=865566 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 865566 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 865566 ']' 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:26.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.954 12:50:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.954 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:27.213 12:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:27.213 12:50:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:37.195 Initializing NVMe Controllers 00:10:37.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:37.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:37.195 Initialization complete. Launching workers. 00:10:37.195 ======================================================== 00:10:37.195 Latency(us) 00:10:37.195 Device Information : IOPS MiB/s Average min max 00:10:37.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18346.44 71.67 3487.98 687.33 20031.83 00:10:37.195 ======================================================== 00:10:37.195 Total : 18346.44 71.67 3487.98 687.33 20031.83 00:10:37.195 00:10:37.195 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:37.195 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:37.195 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.195 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:37.195 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.195 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:37.195 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.195 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.195 rmmod nvme_tcp 00:10:37.195 rmmod nvme_fabrics 00:10:37.195 rmmod nvme_keyring 00:10:37.453 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
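[Editor's note] The target bring-up issued via `rpc_cmd` in the trace above (nvmf_example.sh@45-57) reduces to four JSON-RPCs before `spdk_nvme_perf` is launched. A dry-run sketch that prints the equivalent `scripts/rpc.py` invocations instead of sending them to a live target (the NQN, serial, and sizes are copied from this log; the socket path is the `/var/tmp/spdk.sock` default seen above):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the rpc_cmd provisioning sequence in the trace; prints
# the rpc.py calls instead of issuing them to a running SPDK target.
provision_sketch() {
    local rpc="echo scripts/rpc.py -s /var/tmp/spdk.sock"
    local nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512                      # 64 MiB RAM-backed bdev -> Malloc0
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
}
provision_sketch
```

The perf run that follows (`-q 64 -o 4096 -w randrw -M 30 -t 10`) then connects to that listener: queue depth 64, 4 KiB I/O, 30% reads, for 10 seconds, which is what produces the latency table above.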
00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 865566 ']' 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 865566 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 865566 ']' 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 865566 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 865566 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 865566' 00:10:37.454 killing process with pid 865566 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 865566 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 865566 00:10:37.454 nvmf threads initialize successfully 00:10:37.454 bdev subsystem init successfully 00:10:37.454 created a nvmf target service 00:10:37.454 create targets's poll groups done 00:10:37.454 all subsystems of target started 00:10:37.454 nvmf target is running 00:10:37.454 all subsystems of target stopped 00:10:37.454 destroy targets's poll groups done 00:10:37.454 destroyed the nvmf target service 00:10:37.454 bdev subsystem finish 
successfully 00:10:37.454 nvmf threads destroy successfully 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.454 12:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.993 00:10:39.993 real 0m19.700s 00:10:39.993 user 0m45.928s 00:10:39.993 sys 0m6.023s 00:10:39.993 12:50:47 
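[Editor's note] The `iptr` teardown near the end of the trace (common.sh@791) restores the firewall minus every rule carrying the `SPDK_NVMF` comment that `ipts` added during setup: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering itself is just text processing, demonstrated here on a captured-rules string rather than live iptables (the rule text mirrors this log; running against real tables needs root):

```shell
#!/usr/bin/env bash
# The SPDK_NVMF-tag filter from iptr, applied to a stand-in for the
# iptables-save output instead of the live ruleset.
saved_rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Drop only the SPDK-tagged rule; everything else survives the restore.
kept=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Tagging each inserted rule with a fixed comment is what lets the cleanup remove exactly its own rules without tracking rule numbers.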
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.993 ************************************ 00:10:39.993 END TEST nvmf_example 00:10:39.993 ************************************ 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:39.993 ************************************ 00:10:39.993 START TEST nvmf_filesystem 00:10:39.993 ************************************ 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:39.993 * Looking for test storage... 
00:10:39.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:39.993 
12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:39.993 --rc lcov_branch_coverage=1 --rc 
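[Editor's note] The `lt 1.15 2` check traced above (scripts/common.sh@333-368, deciding which lcov option set to use) splits each version string on `.`, `-`, and `:` and compares components left to right, padding the shorter one with zeros. A minimal re-creation of that comparison (function name `version_lt` is this sketch's own; numeric components only, as in the traced run):

```shell
#!/usr/bin/env bash
# Minimal sketch of the cmp_versions "<" path from scripts/common.sh:
# split on . - : and compare numeric components left to right.
version_lt() {   # returns 0 (true) when $1 < $2
    local IFS=.-:
    local v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```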
lcov_function_coverage=1 00:10:39.993 --rc genhtml_branch_coverage=1 00:10:39.993 --rc genhtml_function_coverage=1 00:10:39.993 --rc genhtml_legend=1 00:10:39.993 --rc geninfo_all_blocks=1 00:10:39.993 --rc geninfo_unexecuted_blocks=1 00:10:39.993 00:10:39.993 ' 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:39.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.993 --rc genhtml_branch_coverage=1 00:10:39.993 --rc genhtml_function_coverage=1 00:10:39.993 --rc genhtml_legend=1 00:10:39.993 --rc geninfo_all_blocks=1 00:10:39.993 --rc geninfo_unexecuted_blocks=1 00:10:39.993 00:10:39.993 ' 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:39.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.993 --rc genhtml_branch_coverage=1 00:10:39.993 --rc genhtml_function_coverage=1 00:10:39.993 --rc genhtml_legend=1 00:10:39.993 --rc geninfo_all_blocks=1 00:10:39.993 --rc geninfo_unexecuted_blocks=1 00:10:39.993 00:10:39.993 ' 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:39.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.993 --rc genhtml_branch_coverage=1 00:10:39.993 --rc genhtml_function_coverage=1 00:10:39.993 --rc genhtml_legend=1 00:10:39.993 --rc geninfo_all_blocks=1 00:10:39.993 --rc geninfo_unexecuted_blocks=1 00:10:39.993 00:10:39.993 ' 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:39.993 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:39.993 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:39.993 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:39.994 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:39.994 
12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:39.994 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:39.994 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:39.994 #define SPDK_CONFIG_H 00:10:39.994 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:39.994 #define SPDK_CONFIG_APPS 1 00:10:39.994 #define SPDK_CONFIG_ARCH native 00:10:39.994 #undef SPDK_CONFIG_ASAN 00:10:39.994 #undef SPDK_CONFIG_AVAHI 00:10:39.994 #undef SPDK_CONFIG_CET 00:10:39.994 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:39.994 #define SPDK_CONFIG_COVERAGE 1 00:10:39.994 #define SPDK_CONFIG_CROSS_PREFIX 00:10:39.994 #undef SPDK_CONFIG_CRYPTO 00:10:39.994 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:39.994 #undef SPDK_CONFIG_CUSTOMOCF 00:10:39.994 #undef SPDK_CONFIG_DAOS 00:10:39.994 #define SPDK_CONFIG_DAOS_DIR 00:10:39.994 #define SPDK_CONFIG_DEBUG 1 00:10:39.994 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:39.994 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:39.994 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:39.995 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:39.995 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:39.995 #undef SPDK_CONFIG_DPDK_UADK 00:10:39.995 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:39.995 #define SPDK_CONFIG_EXAMPLES 1 00:10:39.995 #undef SPDK_CONFIG_FC 00:10:39.995 #define SPDK_CONFIG_FC_PATH 00:10:39.995 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:39.995 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:39.995 #define SPDK_CONFIG_FSDEV 1 00:10:39.995 #undef SPDK_CONFIG_FUSE 00:10:39.995 #undef SPDK_CONFIG_FUZZER 00:10:39.995 #define 
SPDK_CONFIG_FUZZER_LIB 00:10:39.995 #undef SPDK_CONFIG_GOLANG 00:10:39.995 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:39.995 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:39.995 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:39.995 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:39.995 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:39.995 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:39.995 #undef SPDK_CONFIG_HAVE_LZ4 00:10:39.995 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:39.995 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:39.995 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:39.995 #define SPDK_CONFIG_IDXD 1 00:10:39.995 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:39.995 #undef SPDK_CONFIG_IPSEC_MB 00:10:39.995 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:39.995 #define SPDK_CONFIG_ISAL 1 00:10:39.995 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:39.995 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:39.995 #define SPDK_CONFIG_LIBDIR 00:10:39.995 #undef SPDK_CONFIG_LTO 00:10:39.995 #define SPDK_CONFIG_MAX_LCORES 128 00:10:39.995 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:39.995 #define SPDK_CONFIG_NVME_CUSE 1 00:10:39.995 #undef SPDK_CONFIG_OCF 00:10:39.995 #define SPDK_CONFIG_OCF_PATH 00:10:39.995 #define SPDK_CONFIG_OPENSSL_PATH 00:10:39.995 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:39.995 #define SPDK_CONFIG_PGO_DIR 00:10:39.995 #undef SPDK_CONFIG_PGO_USE 00:10:39.995 #define SPDK_CONFIG_PREFIX /usr/local 00:10:39.995 #undef SPDK_CONFIG_RAID5F 00:10:39.995 #undef SPDK_CONFIG_RBD 00:10:39.995 #define SPDK_CONFIG_RDMA 1 00:10:39.995 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:39.995 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:39.995 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:39.995 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:39.995 #define SPDK_CONFIG_SHARED 1 00:10:39.995 #undef SPDK_CONFIG_SMA 00:10:39.995 #define SPDK_CONFIG_TESTS 1 00:10:39.995 #undef SPDK_CONFIG_TSAN 00:10:39.995 #define SPDK_CONFIG_UBLK 1 00:10:39.995 #define SPDK_CONFIG_UBSAN 1 00:10:39.995 #undef 
SPDK_CONFIG_UNIT_TESTS 00:10:39.995 #undef SPDK_CONFIG_URING 00:10:39.995 #define SPDK_CONFIG_URING_PATH 00:10:39.995 #undef SPDK_CONFIG_URING_ZNS 00:10:39.995 #undef SPDK_CONFIG_USDT 00:10:39.995 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:39.995 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:39.995 #define SPDK_CONFIG_VFIO_USER 1 00:10:39.995 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:39.995 #define SPDK_CONFIG_VHOST 1 00:10:39.995 #define SPDK_CONFIG_VIRTIO 1 00:10:39.995 #undef SPDK_CONFIG_VTUNE 00:10:39.995 #define SPDK_CONFIG_VTUNE_DIR 00:10:39.995 #define SPDK_CONFIG_WERROR 1 00:10:39.995 #define SPDK_CONFIG_WPDK_DIR 00:10:39.995 #undef SPDK_CONFIG_XNVME 00:10:39.995 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.995 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:39.995 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:39.995 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:39.996 
12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:39.996 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:39.996 
12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:39.996 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:39.996 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 867906 ]] 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 867906 00:10:39.997 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.F8U2jB 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.F8U2jB/tests/target /tmp/spdk.F8U2jB 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88105140224 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552405504 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7447265280 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47766171648 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087470592 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775993856 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:10:39.998 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=208896 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:39.998 * Looking for test storage... 
00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88105140224 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9661857792 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.998 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:39.998 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:39.998 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.258 --rc genhtml_branch_coverage=1 00:10:40.258 --rc genhtml_function_coverage=1 00:10:40.258 --rc genhtml_legend=1 00:10:40.258 --rc geninfo_all_blocks=1 00:10:40.258 --rc geninfo_unexecuted_blocks=1 00:10:40.258 00:10:40.258 ' 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.258 --rc genhtml_branch_coverage=1 00:10:40.258 --rc genhtml_function_coverage=1 00:10:40.258 --rc genhtml_legend=1 00:10:40.258 --rc geninfo_all_blocks=1 00:10:40.258 --rc geninfo_unexecuted_blocks=1 00:10:40.258 00:10:40.258 ' 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.258 --rc genhtml_branch_coverage=1 00:10:40.258 --rc genhtml_function_coverage=1 00:10:40.258 --rc genhtml_legend=1 00:10:40.258 --rc geninfo_all_blocks=1 00:10:40.258 --rc geninfo_unexecuted_blocks=1 00:10:40.258 00:10:40.258 ' 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.258 --rc genhtml_branch_coverage=1 00:10:40.258 --rc genhtml_function_coverage=1 00:10:40.258 --rc genhtml_legend=1 00:10:40.258 --rc geninfo_all_blocks=1 00:10:40.258 --rc geninfo_unexecuted_blocks=1 00:10:40.258 00:10:40.258 ' 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.258 12:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.258 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.259 12:50:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.831 12:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:46.831 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:46.831 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.831 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.832 12:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:46.832 Found net devices under 0000:af:00.0: cvl_0_0 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:46.832 Found net devices under 0000:af:00.1: cvl_0_1 00:10:46.832 12:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:46.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:10:46.832 00:10:46.832 --- 10.0.0.2 ping statistics --- 00:10:46.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.832 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:10:46.832 00:10:46.832 --- 10.0.0.1 ping statistics --- 00:10:46.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.832 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:46.832 12:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:46.832 ************************************ 00:10:46.832 START TEST nvmf_filesystem_no_in_capsule 00:10:46.832 ************************************ 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=871069 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 871069 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 871069 ']' 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.832 12:50:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.832 [2024-12-15 12:50:54.019559] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:46.832 [2024-12-15 12:50:54.019596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.832 [2024-12-15 12:50:54.098770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.832 [2024-12-15 12:50:54.121545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.832 [2024-12-15 12:50:54.121584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:46.832 [2024-12-15 12:50:54.121591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.832 [2024-12-15 12:50:54.121597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.832 [2024-12-15 12:50:54.121602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.832 [2024-12-15 12:50:54.123085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.832 [2024-12-15 12:50:54.123200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.832 [2024-12-15 12:50:54.123309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.832 [2024-12-15 12:50:54.123310] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.832 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.832 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:46.832 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.832 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.832 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.832 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.832 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.833 [2024-12-15 12:50:54.251862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.833 Malloc1 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.833 [2024-12-15 12:50:54.415945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:46.833 12:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:46.833 { 00:10:46.833 "name": "Malloc1", 00:10:46.833 "aliases": [ 00:10:46.833 "73234143-4934-45b3-a274-faf4e63de343" 00:10:46.833 ], 00:10:46.833 "product_name": "Malloc disk", 00:10:46.833 "block_size": 512, 00:10:46.833 "num_blocks": 1048576, 00:10:46.833 "uuid": "73234143-4934-45b3-a274-faf4e63de343", 00:10:46.833 "assigned_rate_limits": { 00:10:46.833 "rw_ios_per_sec": 0, 00:10:46.833 "rw_mbytes_per_sec": 0, 00:10:46.833 "r_mbytes_per_sec": 0, 00:10:46.833 "w_mbytes_per_sec": 0 00:10:46.833 }, 00:10:46.833 "claimed": true, 00:10:46.833 "claim_type": "exclusive_write", 00:10:46.833 "zoned": false, 00:10:46.833 "supported_io_types": { 00:10:46.833 "read": true, 00:10:46.833 "write": true, 00:10:46.833 "unmap": true, 00:10:46.833 "flush": true, 00:10:46.833 "reset": true, 00:10:46.833 "nvme_admin": false, 00:10:46.833 "nvme_io": false, 00:10:46.833 "nvme_io_md": false, 00:10:46.833 "write_zeroes": true, 00:10:46.833 "zcopy": true, 00:10:46.833 "get_zone_info": false, 00:10:46.833 "zone_management": false, 00:10:46.833 "zone_append": false, 00:10:46.833 "compare": false, 00:10:46.833 "compare_and_write": 
false, 00:10:46.833 "abort": true, 00:10:46.833 "seek_hole": false, 00:10:46.833 "seek_data": false, 00:10:46.833 "copy": true, 00:10:46.833 "nvme_iov_md": false 00:10:46.833 }, 00:10:46.833 "memory_domains": [ 00:10:46.833 { 00:10:46.833 "dma_device_id": "system", 00:10:46.833 "dma_device_type": 1 00:10:46.833 }, 00:10:46.833 { 00:10:46.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.833 "dma_device_type": 2 00:10:46.833 } 00:10:46.833 ], 00:10:46.833 "driver_specific": {} 00:10:46.833 } 00:10:46.833 ]' 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:46.833 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:47.770 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:47.770 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:47.770 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.770 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:47.770 12:50:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:50.302 12:50:57 
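The waitforserial helper traced above loops up to ~16 times, counting `lsblk -l -o NAME,SERIAL` rows that carry the subsystem serial until the expected device count appears. A generic retry helper in the same shape (the probe here is a stand-in for the log's `lsblk | grep -c` pipeline):

```shell
# Retry helper mirroring waitforserial: re-run a counting probe until it
# yields the expected number, with the log's ~16-attempt budget.
# "probe" is any command printing a count; the real test greps lsblk.
wait_for_count() {
  local probe=$1 expected=$2 i=0 n
  while (( i++ <= 15 )); do
    n=$(eval "$probe")
    if (( n == expected )); then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

Usage matching the log would be `wait_for_count "lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME" 1`.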
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:50.302 12:50:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:50.561 12:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:51.497 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:51.497 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:51.498 12:50:59 
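Two checks from the trace above can be reproduced offline: the PCRE lookahead that pulls the device name owning the serial out of `lsblk` output, and the size comparison between the malloc bdev (block_size × num_blocks from bdev_get_bdevs) and the nvme block device. The canned listing stands in for real lsblk output; the regex and the 512 × 1048576 figures are taken verbatim from the log:

```shell
# Device-name extraction and size check from the log, on canned input.
listing='NAME    SERIAL
nvme0n1 SPDKISFASTANDAWESOME'
nvme_name=$(printf '%s\n' "$listing" | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
bs=512          # jq '.[] .block_size' of the bdev_get_bdevs output
nb=1048576      # jq '.[] .num_blocks'
malloc_size=$(( bs * nb ))
echo "$nvme_name $malloc_size"   # nvme0n1 and 536870912 bytes (512 MiB)
```

Requires GNU grep built with PCRE support (`-P`), as on the test node.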
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.498 ************************************ 00:10:51.498 START TEST filesystem_ext4 00:10:51.498 ************************************ 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:51.498 12:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:51.498 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:51.498 mke2fs 1.47.0 (5-Feb-2023) 00:10:51.756 Discarding device blocks: 0/522240 done 00:10:51.756 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:51.756 Filesystem UUID: 3ba19e35-6487-44cc-9ade-431a26bb9ef0 00:10:51.756 Superblock backups stored on blocks: 00:10:51.756 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:51.756 00:10:51.756 Allocating group tables: 0/64 done 00:10:51.756 Writing inode tables: 0/64 done 00:10:51.756 Creating journal (8192 blocks): done 00:10:51.756 Writing superblocks and filesystem accounting information: 0/64 done 00:10:51.756 00:10:51.756 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:51.757 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:57.028 12:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 871069 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.028 00:10:57.028 real 0m5.560s 00:10:57.028 user 0m0.025s 00:10:57.028 sys 0m0.072s 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.028 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:57.028 ************************************ 00:10:57.028 END TEST filesystem_ext4 00:10:57.028 ************************************ 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:57.287 
12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.287 ************************************ 00:10:57.287 START TEST filesystem_btrfs 00:10:57.287 ************************************ 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:57.287 12:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:57.287 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:57.287 btrfs-progs v6.8.1 00:10:57.287 See https://btrfs.readthedocs.io for more information. 00:10:57.287 00:10:57.287 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:57.287 NOTE: several default settings have changed in version 5.15, please make sure 00:10:57.287 this does not affect your deployments: 00:10:57.287 - DUP for metadata (-m dup) 00:10:57.287 - enabled no-holes (-O no-holes) 00:10:57.287 - enabled free-space-tree (-R free-space-tree) 00:10:57.287 00:10:57.287 Label: (null) 00:10:57.287 UUID: eaba6b27-02c6-4bf9-9e86-66e678490aa7 00:10:57.287 Node size: 16384 00:10:57.287 Sector size: 4096 (CPU page size: 4096) 00:10:57.287 Filesystem size: 510.00MiB 00:10:57.287 Block group profiles: 00:10:57.287 Data: single 8.00MiB 00:10:57.287 Metadata: DUP 32.00MiB 00:10:57.287 System: DUP 8.00MiB 00:10:57.287 SSD detected: yes 00:10:57.287 Zoned device: no 00:10:57.287 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:57.287 Checksum: crc32c 00:10:57.287 Number of devices: 1 00:10:57.287 Devices: 00:10:57.287 ID SIZE PATH 00:10:57.287 1 510.00MiB /dev/nvme0n1p1 00:10:57.287 00:10:57.287 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:57.287 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:58.223 12:51:05 
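Across the mkfs runs, make_filesystem picks its overwrite flag per filesystem: the trace shows `'[' ext4 = ext4 ']'` selecting `-F` for mkfs.ext4, while btrfs and xfs fall through to `-f`. That branch in isolation:

```shell
# make_filesystem's force-flag selection, as seen in the trace:
# mkfs.ext4 spells "force" as -F; mkfs.btrfs and mkfs.xfs use -f.
mkfs_force_flag() {
  local fstype=$1
  if [ "$fstype" = ext4 ]; then
    printf '%s\n' -F
  else
    printf '%s\n' -f
  fi
}
```

The caller then runs `mkfs.$fstype $force $dev_name`, which is why all three invocations in the log succeed on an already-partitioned device.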
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:58.223 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 871069 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:58.223 00:10:58.223 real 0m1.081s 00:10:58.223 user 0m0.023s 00:10:58.223 sys 0m0.120s 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.223 
12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:58.223 ************************************ 00:10:58.223 END TEST filesystem_btrfs 00:10:58.223 ************************************ 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.223 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.482 ************************************ 00:10:58.482 START TEST filesystem_xfs 00:10:58.482 ************************************ 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:58.482 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:58.482 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:58.482 = sectsz=512 attr=2, projid32bit=1 00:10:58.482 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:58.482 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:58.482 data = bsize=4096 blocks=130560, imaxpct=25 00:10:58.482 = sunit=0 swidth=0 blks 00:10:58.482 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:58.482 log =internal log bsize=4096 blocks=16384, version=2 00:10:58.482 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:58.482 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:59.418 Discarding blocks...Done. 
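After each mkfs, the test mounts the partition at /mnt/device and runs the same smoke sequence: touch a file, sync, remove it, sync, umount. The I/O portion, reduced to what runs without a real block device (mount/umount are dropped here, so this is exercised against a plain directory):

```shell
# The touch/sync/rm smoke test from the log, minus mount/umount.
# Operates on any writable directory standing in for the mount point.
fs_smoke_test() {
  local mnt=$1
  touch "$mnt/aaa" &&
  sync &&
  rm "$mnt/aaa" &&
  sync
}
```

In the log this is what drives data through the NVMe-oF block device for each of ext4, btrfs, and xfs in turn.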
00:10:59.418 12:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:59.418 12:51:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 871069 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.391 12:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.391 00:11:01.391 real 0m2.975s 00:11:01.391 user 0m0.017s 00:11:01.391 sys 0m0.082s 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:01.391 ************************************ 00:11:01.391 END TEST filesystem_xfs 00:11:01.391 ************************************ 00:11:01.391 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 871069 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 871069 ']' 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 871069 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.651 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 871069 00:11:01.910 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.910 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.910 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 871069' 00:11:01.910 killing process with pid 871069 00:11:01.910 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 871069 00:11:01.910 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 871069 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:02.169 00:11:02.169 real 0m15.934s 00:11:02.169 user 1m2.730s 00:11:02.169 sys 0m1.361s 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.169 ************************************ 00:11:02.169 END TEST nvmf_filesystem_no_in_capsule 00:11:02.169 ************************************ 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.169 12:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.169 ************************************ 00:11:02.169 START TEST nvmf_filesystem_in_capsule 00:11:02.169 ************************************ 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=874325 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 874325 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.169 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 874325 ']' 00:11:02.170 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.170 12:51:09 
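nvmfappstart launches nvmf_tgt and then waitforlisten blocks until the app is up on /var/tmp/spdk.sock. A simplified poll in that spirit (the real helper also issues an RPC to confirm the app answers; the retry budget and interval here are assumptions):

```shell
# waitforlisten, roughly: poll for the target's UNIX-domain RPC socket.
# Socket path defaults to the log's /var/tmp/spdk.sock; tries/interval
# are illustrative, not the helper's actual values.
wait_for_listen() {
  local sock=${1:-/var/tmp/spdk.sock} tries=${2:-100} i=0
  while (( i++ < tries )); do
    if [ -S "$sock" ]; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}
```

A nonexistent path makes it give up after the budget, which is the timeout behavior the harness relies on when the target fails to start.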
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.170 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.170 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.170 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.170 [2024-12-15 12:51:10.037661] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:02.170 [2024-12-15 12:51:10.037712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.429 [2024-12-15 12:51:10.119100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.429 [2024-12-15 12:51:10.141806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.429 [2024-12-15 12:51:10.141853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.429 [2024-12-15 12:51:10.141861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.429 [2024-12-15 12:51:10.141868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.429 [2024-12-15 12:51:10.141875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:02.429 [2024-12-15 12:51:10.143410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.429 [2024-12-15 12:51:10.143515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.429 [2024-12-15 12:51:10.143625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.429 [2024-12-15 12:51:10.143626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.429 [2024-12-15 12:51:10.283973] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.429 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.688 Malloc1 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.688 12:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.688 [2024-12-15 12:51:10.448992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.688 12:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:02.688 { 00:11:02.688 "name": "Malloc1", 00:11:02.688 "aliases": [ 00:11:02.688 "41310771-88e3-4293-a04f-afebd6836c50" 00:11:02.688 ], 00:11:02.688 "product_name": "Malloc disk", 00:11:02.688 "block_size": 512, 00:11:02.688 "num_blocks": 1048576, 00:11:02.688 "uuid": "41310771-88e3-4293-a04f-afebd6836c50", 00:11:02.688 "assigned_rate_limits": { 00:11:02.688 "rw_ios_per_sec": 0, 00:11:02.688 "rw_mbytes_per_sec": 0, 00:11:02.688 "r_mbytes_per_sec": 0, 00:11:02.688 "w_mbytes_per_sec": 0 00:11:02.688 }, 00:11:02.688 "claimed": true, 00:11:02.688 "claim_type": "exclusive_write", 00:11:02.688 "zoned": false, 00:11:02.688 "supported_io_types": { 00:11:02.688 "read": true, 00:11:02.688 "write": true, 00:11:02.688 "unmap": true, 00:11:02.688 "flush": true, 00:11:02.688 "reset": true, 00:11:02.688 "nvme_admin": false, 00:11:02.688 "nvme_io": false, 00:11:02.688 "nvme_io_md": false, 00:11:02.688 "write_zeroes": true, 00:11:02.688 "zcopy": true, 00:11:02.688 "get_zone_info": false, 00:11:02.688 "zone_management": false, 00:11:02.688 "zone_append": false, 00:11:02.688 "compare": false, 00:11:02.688 "compare_and_write": false, 00:11:02.688 "abort": true, 00:11:02.688 "seek_hole": false, 00:11:02.688 "seek_data": false, 00:11:02.688 "copy": true, 00:11:02.688 "nvme_iov_md": false 00:11:02.688 }, 00:11:02.688 "memory_domains": [ 00:11:02.688 { 00:11:02.688 "dma_device_id": "system", 00:11:02.688 "dma_device_type": 1 00:11:02.688 }, 00:11:02.688 { 00:11:02.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.688 "dma_device_type": 2 00:11:02.688 } 00:11:02.688 ], 00:11:02.688 
"driver_specific": {} 00:11:02.688 } 00:11:02.688 ]' 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:02.688 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:02.689 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:02.689 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.066 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.066 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:04.066 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.066 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:04.066 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:05.968 12:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:05.968 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:05.969 12:51:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:06.227 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:06.486 12:51:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.421 ************************************ 00:11:07.421 START TEST filesystem_in_capsule_ext4 00:11:07.421 ************************************ 00:11:07.421 12:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:07.421 12:51:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:07.679 mke2fs 1.47.0 (5-Feb-2023) 00:11:07.679 Discarding device blocks: 
0/522240 done 00:11:07.679 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:07.679 Filesystem UUID: b69ffb5e-18f6-46ad-b8cb-d9216a9f116c 00:11:07.679 Superblock backups stored on blocks: 00:11:07.679 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:07.679 00:11:07.679 Allocating group tables: 0/64 done 00:11:07.679 Writing inode tables: 0/64 done 00:11:10.968 Creating journal (8192 blocks): done 00:11:10.968 Writing superblocks and filesystem accounting information: 0/64 done 00:11:10.968 00:11:10.968 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:10.968 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 874325 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.314 00:11:16.314 real 0m8.584s 00:11:16.314 user 0m0.027s 00:11:16.314 sys 0m0.072s 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:16.314 ************************************ 00:11:16.314 END TEST filesystem_in_capsule_ext4 00:11:16.314 ************************************ 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:16.314 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 ************************************ 00:11:16.315 START 
TEST filesystem_in_capsule_btrfs 00:11:16.315 ************************************ 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:16.315 12:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:16.315 btrfs-progs v6.8.1 00:11:16.315 See https://btrfs.readthedocs.io for more information. 00:11:16.315 00:11:16.315 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:16.315 NOTE: several default settings have changed in version 5.15, please make sure 00:11:16.315 this does not affect your deployments: 00:11:16.315 - DUP for metadata (-m dup) 00:11:16.315 - enabled no-holes (-O no-holes) 00:11:16.315 - enabled free-space-tree (-R free-space-tree) 00:11:16.315 00:11:16.315 Label: (null) 00:11:16.315 UUID: 911252f1-d37e-46fe-96c3-63a7893a46f5 00:11:16.315 Node size: 16384 00:11:16.315 Sector size: 4096 (CPU page size: 4096) 00:11:16.315 Filesystem size: 510.00MiB 00:11:16.315 Block group profiles: 00:11:16.315 Data: single 8.00MiB 00:11:16.315 Metadata: DUP 32.00MiB 00:11:16.315 System: DUP 8.00MiB 00:11:16.315 SSD detected: yes 00:11:16.315 Zoned device: no 00:11:16.315 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:16.315 Checksum: crc32c 00:11:16.315 Number of devices: 1 00:11:16.315 Devices: 00:11:16.315 ID SIZE PATH 00:11:16.315 1 510.00MiB /dev/nvme0n1p1 00:11:16.315 00:11:16.315 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:16.315 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.641 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.641 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:16.641 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.641 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:16.641 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 874325 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.642 00:11:16.642 real 0m0.496s 00:11:16.642 user 0m0.029s 00:11:16.642 sys 0m0.110s 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.642 ************************************ 00:11:16.642 END TEST filesystem_in_capsule_btrfs 00:11:16.642 ************************************ 00:11:16.642 12:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.642 ************************************ 00:11:16.642 START TEST filesystem_in_capsule_xfs 00:11:16.642 ************************************ 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:16.642 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:16.901 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.901 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:16.901 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:16.901 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:16.901 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:16.901 
12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:16.901 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:16.901 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:16.901 12:51:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:16.901 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:16.901 = sectsz=512 attr=2, projid32bit=1 00:11:16.901 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:16.901 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:16.901 data = bsize=4096 blocks=130560, imaxpct=25 00:11:16.901 = sunit=0 swidth=0 blks 00:11:16.901 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:16.901 log =internal log bsize=4096 blocks=16384, version=2 00:11:16.901 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:16.901 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:17.836 Discarding blocks...Done. 
00:11:17.836 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:17.836 12:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 874325 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:19.213 00:11:19.213 real 0m2.558s 00:11:19.213 user 0m0.028s 00:11:19.213 sys 0m0.071s 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.213 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:19.213 ************************************ 00:11:19.213 END TEST filesystem_in_capsule_xfs 00:11:19.213 ************************************ 00:11:19.472 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:19.472 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:19.472 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:19.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.731 12:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 874325 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 874325 ']' 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 874325 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.731 12:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 874325 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 874325' 00:11:19.731 killing process with pid 874325 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 874325 00:11:19.731 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 874325 00:11:19.990 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:19.990 00:11:19.990 real 0m17.819s 00:11:19.990 user 1m10.140s 00:11:19.990 sys 0m1.448s 00:11:19.990 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.990 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.990 ************************************ 00:11:19.990 END TEST nvmf_filesystem_in_capsule 00:11:19.990 ************************************ 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.991 rmmod nvme_tcp 00:11:19.991 rmmod nvme_fabrics 00:11:19.991 rmmod nvme_keyring 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.991 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.249 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:20.250 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:20.250 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.250 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.250 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.250 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.250 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.250 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.250 12:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.155 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.155 00:11:22.155 real 0m42.445s 00:11:22.155 user 2m14.989s 00:11:22.155 sys 0m7.398s 00:11:22.155 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.155 12:51:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.155 ************************************ 00:11:22.155 END TEST nvmf_filesystem 00:11:22.155 ************************************ 00:11:22.155 12:51:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:22.155 12:51:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.155 12:51:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.155 12:51:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.155 ************************************ 00:11:22.155 START TEST nvmf_target_discovery 00:11:22.155 ************************************ 00:11:22.155 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:22.414 * Looking for test storage... 
00:11:22.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:22.414 
12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.414 --rc genhtml_branch_coverage=1 00:11:22.414 --rc genhtml_function_coverage=1 00:11:22.414 --rc genhtml_legend=1 00:11:22.414 --rc geninfo_all_blocks=1 00:11:22.414 --rc geninfo_unexecuted_blocks=1 00:11:22.414 00:11:22.414 ' 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.414 --rc genhtml_branch_coverage=1 00:11:22.414 --rc genhtml_function_coverage=1 00:11:22.414 --rc genhtml_legend=1 00:11:22.414 --rc geninfo_all_blocks=1 00:11:22.414 --rc geninfo_unexecuted_blocks=1 00:11:22.414 00:11:22.414 ' 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.414 --rc genhtml_branch_coverage=1 00:11:22.414 --rc genhtml_function_coverage=1 00:11:22.414 --rc genhtml_legend=1 00:11:22.414 --rc geninfo_all_blocks=1 00:11:22.414 --rc geninfo_unexecuted_blocks=1 00:11:22.414 00:11:22.414 ' 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.414 --rc genhtml_branch_coverage=1 00:11:22.414 --rc genhtml_function_coverage=1 00:11:22.414 --rc genhtml_legend=1 00:11:22.414 --rc geninfo_all_blocks=1 00:11:22.414 --rc geninfo_unexecuted_blocks=1 00:11:22.414 00:11:22.414 ' 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.414 12:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.414 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.415 12:51:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.983 12:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.983 12:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.983 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
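The nvmf/common.sh trace above bins supported NICs by PCI vendor:device ID: Intel 0x1592/0x159b into `e810`, Intel 0x37d2 into `x722`, and the listed Mellanox (0x15b3) devices into `mlx`. The real code indexes a `pci_bus_cache` map; `classify_nic` below is an invented stand-in that labels a single vendor:device string, just to illustrate the grouping:

```shell
# Sketch of the NIC binning traced above; classify_nic is hypothetical.
classify_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;     # Intel X722
    0x15b3:*)                    echo mlx ;;      # Mellanox devices
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086:0x159b   # the device found at 0000:af:00.0
```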
00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:28.984 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:28.984 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.984 12:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:28.984 Found net devices under 0000:af:00.0: cvl_0_0 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.984 12:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:28.984 Found net devices under 0000:af:00.1: cvl_0_1 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.984 12:51:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:28.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:11:28.984 00:11:28.984 --- 10.0.0.2 ping statistics --- 00:11:28.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.984 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:11:28.984 00:11:28.984 --- 10.0.0.1 ping statistics --- 00:11:28.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.984 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=880943 00:11:28.984 12:51:36 
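The netns plumbing traced above (nvmf/common.sh@250-291) reduces to a short sequence of ip/iptables commands. A dry-run sketch of that sequence — `run()` only echoes, so it is safe without root; interface, namespace, and address names are taken from the log, and the exact common.sh ordering may differ slightly:

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tcp_init wiring seen in the log above.
# run() echoes instead of executing, so no root privileges are needed.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk        # target network namespace
TARGET_IF=cvl_0_0         # NIC moved into the namespace (10.0.0.2)
INIT_IF=cvl_0_1           # NIC left in the root namespace (10.0.0.1)

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # root ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> root ns
```

The two pings at the end mirror the bidirectional reachability check the log performs before starting nvmf_tgt.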
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 880943 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 880943 ']' 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.984 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.984 [2024-12-15 12:51:36.236969] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:28.984 [2024-12-15 12:51:36.237021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.985 [2024-12-15 12:51:36.318331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.985 [2024-12-15 12:51:36.341900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:28.985 [2024-12-15 12:51:36.341938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.985 [2024-12-15 12:51:36.341945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.985 [2024-12-15 12:51:36.341951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.985 [2024-12-15 12:51:36.341956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.985 [2024-12-15 12:51:36.343346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.985 [2024-12-15 12:51:36.343458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.985 [2024-12-15 12:51:36.343541] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.985 [2024-12-15 12:51:36.343540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 [2024-12-15 12:51:36.476359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 Null1 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 
12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 [2024-12-15 12:51:36.534948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 Null2 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 
12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 Null3 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 Null4 00:11:28.985 
12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
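discovery.sh@26-30, traced above, repeats the same four RPCs for i=1..4: create a null bdev, create a subsystem, attach the bdev as a namespace, and add a TCP listener. A condensed sketch of that loop — `rpc()` echoes the call instead of invoking SPDK's rpc.py, and the serial-number format is inferred from the values in the log:

```shell
#!/bin/sh
# Sketch of the discovery.sh subsystem-setup loop traced above.
# rpc() echoes the RPC instead of calling SPDK's rpc.py.
rpc() { echo "rpc.py $*"; }

for i in 1 2 3 4; do
    rpc bdev_null_create "Null$i" 102400 512
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "$(printf 'SPDK%014d' "$i")"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
```

The `nvme discover` output that follows shows exactly these four subsystems plus the discovery subsystem and the 4430 referral.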
common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.985 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:28.985 00:11:28.985 Discovery Log Number of Records 6, Generation counter 6 00:11:28.985 =====Discovery Log Entry 0====== 00:11:28.985 trtype: tcp 00:11:28.985 adrfam: ipv4 00:11:28.985 subtype: current discovery subsystem 00:11:28.985 treq: not required 00:11:28.986 portid: 0 00:11:28.986 trsvcid: 4420 00:11:28.986 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.986 traddr: 10.0.0.2 00:11:28.986 eflags: explicit discovery connections, duplicate discovery information 00:11:28.986 sectype: none 00:11:28.986 =====Discovery Log Entry 1====== 00:11:28.986 trtype: tcp 00:11:28.986 adrfam: ipv4 00:11:28.986 subtype: nvme subsystem 00:11:28.986 treq: not required 00:11:28.986 portid: 0 00:11:28.986 trsvcid: 4420 00:11:28.986 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:28.986 traddr: 10.0.0.2 00:11:28.986 eflags: none 00:11:28.986 sectype: none 00:11:28.986 =====Discovery Log Entry 2====== 00:11:28.986 
trtype: tcp 00:11:28.986 adrfam: ipv4 00:11:28.986 subtype: nvme subsystem 00:11:28.986 treq: not required 00:11:28.986 portid: 0 00:11:28.986 trsvcid: 4420 00:11:28.986 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:28.986 traddr: 10.0.0.2 00:11:28.986 eflags: none 00:11:28.986 sectype: none 00:11:28.986 =====Discovery Log Entry 3====== 00:11:28.986 trtype: tcp 00:11:28.986 adrfam: ipv4 00:11:28.986 subtype: nvme subsystem 00:11:28.986 treq: not required 00:11:28.986 portid: 0 00:11:28.986 trsvcid: 4420 00:11:28.986 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:28.986 traddr: 10.0.0.2 00:11:28.986 eflags: none 00:11:28.986 sectype: none 00:11:28.986 =====Discovery Log Entry 4====== 00:11:28.986 trtype: tcp 00:11:28.986 adrfam: ipv4 00:11:28.986 subtype: nvme subsystem 00:11:28.986 treq: not required 00:11:28.986 portid: 0 00:11:28.986 trsvcid: 4420 00:11:28.986 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:28.986 traddr: 10.0.0.2 00:11:28.986 eflags: none 00:11:28.986 sectype: none 00:11:28.986 =====Discovery Log Entry 5====== 00:11:28.986 trtype: tcp 00:11:28.986 adrfam: ipv4 00:11:28.986 subtype: discovery subsystem referral 00:11:28.986 treq: not required 00:11:28.986 portid: 0 00:11:28.986 trsvcid: 4430 00:11:28.986 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.986 traddr: 10.0.0.2 00:11:28.986 eflags: none 00:11:28.986 sectype: none 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:28.986 Perform nvmf subsystem discovery via RPC 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 [ 00:11:28.986 { 00:11:28.986 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:28.986 "subtype": "Discovery", 00:11:28.986 "listen_addresses": [ 00:11:28.986 { 00:11:28.986 "trtype": "TCP", 00:11:28.986 "adrfam": "IPv4", 00:11:28.986 "traddr": "10.0.0.2", 00:11:28.986 "trsvcid": "4420" 00:11:28.986 } 00:11:28.986 ], 00:11:28.986 "allow_any_host": true, 00:11:28.986 "hosts": [] 00:11:28.986 }, 00:11:28.986 { 00:11:28.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.986 "subtype": "NVMe", 00:11:28.986 "listen_addresses": [ 00:11:28.986 { 00:11:28.986 "trtype": "TCP", 00:11:28.986 "adrfam": "IPv4", 00:11:28.986 "traddr": "10.0.0.2", 00:11:28.986 "trsvcid": "4420" 00:11:28.986 } 00:11:28.986 ], 00:11:28.986 "allow_any_host": true, 00:11:28.986 "hosts": [], 00:11:28.986 "serial_number": "SPDK00000000000001", 00:11:28.986 "model_number": "SPDK bdev Controller", 00:11:28.986 "max_namespaces": 32, 00:11:28.986 "min_cntlid": 1, 00:11:28.986 "max_cntlid": 65519, 00:11:28.986 "namespaces": [ 00:11:28.986 { 00:11:28.986 "nsid": 1, 00:11:28.986 "bdev_name": "Null1", 00:11:28.986 "name": "Null1", 00:11:28.986 "nguid": "8C76CC2DB73543A6841B91A9EC97BAC9", 00:11:28.986 "uuid": "8c76cc2d-b735-43a6-841b-91a9ec97bac9" 00:11:28.986 } 00:11:28.986 ] 00:11:28.986 }, 00:11:28.986 { 00:11:28.986 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:28.986 "subtype": "NVMe", 00:11:28.986 "listen_addresses": [ 00:11:28.986 { 00:11:28.986 "trtype": "TCP", 00:11:28.986 "adrfam": "IPv4", 00:11:28.986 "traddr": "10.0.0.2", 00:11:28.986 "trsvcid": "4420" 00:11:28.986 } 00:11:28.986 ], 00:11:28.986 "allow_any_host": true, 00:11:28.986 "hosts": [], 00:11:28.986 "serial_number": "SPDK00000000000002", 00:11:28.986 "model_number": "SPDK bdev Controller", 00:11:28.986 "max_namespaces": 32, 00:11:28.986 "min_cntlid": 1, 00:11:28.986 "max_cntlid": 65519, 00:11:28.986 "namespaces": [ 00:11:28.986 { 00:11:28.986 "nsid": 1, 00:11:28.986 "bdev_name": "Null2", 00:11:28.986 "name": "Null2", 00:11:28.986 "nguid": "9824753A67544D80839DB4102200B0E3", 
00:11:28.986 "uuid": "9824753a-6754-4d80-839d-b4102200b0e3" 00:11:28.986 } 00:11:28.986 ] 00:11:28.986 }, 00:11:28.986 { 00:11:28.986 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:28.986 "subtype": "NVMe", 00:11:28.986 "listen_addresses": [ 00:11:28.986 { 00:11:28.986 "trtype": "TCP", 00:11:28.986 "adrfam": "IPv4", 00:11:28.986 "traddr": "10.0.0.2", 00:11:28.986 "trsvcid": "4420" 00:11:28.986 } 00:11:28.986 ], 00:11:28.986 "allow_any_host": true, 00:11:28.986 "hosts": [], 00:11:28.986 "serial_number": "SPDK00000000000003", 00:11:28.986 "model_number": "SPDK bdev Controller", 00:11:28.986 "max_namespaces": 32, 00:11:28.986 "min_cntlid": 1, 00:11:28.986 "max_cntlid": 65519, 00:11:28.986 "namespaces": [ 00:11:28.986 { 00:11:28.986 "nsid": 1, 00:11:28.986 "bdev_name": "Null3", 00:11:28.986 "name": "Null3", 00:11:28.986 "nguid": "0E4D83C8EC8C4183B564DFE320CAEA08", 00:11:28.986 "uuid": "0e4d83c8-ec8c-4183-b564-dfe320caea08" 00:11:28.986 } 00:11:28.986 ] 00:11:28.986 }, 00:11:28.986 { 00:11:28.986 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:28.986 "subtype": "NVMe", 00:11:28.986 "listen_addresses": [ 00:11:28.986 { 00:11:28.986 "trtype": "TCP", 00:11:28.986 "adrfam": "IPv4", 00:11:28.986 "traddr": "10.0.0.2", 00:11:28.986 "trsvcid": "4420" 00:11:28.986 } 00:11:28.986 ], 00:11:28.986 "allow_any_host": true, 00:11:28.986 "hosts": [], 00:11:28.986 "serial_number": "SPDK00000000000004", 00:11:28.986 "model_number": "SPDK bdev Controller", 00:11:28.986 "max_namespaces": 32, 00:11:28.986 "min_cntlid": 1, 00:11:28.986 "max_cntlid": 65519, 00:11:28.986 "namespaces": [ 00:11:28.986 { 00:11:28.986 "nsid": 1, 00:11:28.986 "bdev_name": "Null4", 00:11:28.986 "name": "Null4", 00:11:28.986 "nguid": "539C67603103432AA30D27ADB78DAA34", 00:11:28.986 "uuid": "539c6760-3103-432a-a30d-27adb78daa34" 00:11:28.986 } 00:11:28.986 ] 00:11:28.986 } 00:11:28.986 ] 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 
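Once the per-record timestamps are stripped, the nvmf_get_subsystems reply above is plain JSON, so it can be filtered with jq the same way discovery.sh itself does (the `.[].name` call later in the log). A sketch over an abbreviated sample of that reply:

```shell
#!/bin/sh
# Filter the nvmf_get_subsystems JSON reply with jq, as discovery.sh does.
# The sample below is abbreviated from the full reply shown in the log.
reply='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"},
  {"nqn": "nqn.2016-06.io.spdk:cnode2", "subtype": "NVMe"}
]'
# Print the NQNs of the NVMe subsystems, skipping the discovery subsystem.
echo "$reply" | jq -r '.[] | select(.subtype == "NVMe") | .nqn'
```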
12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.986 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.246 12:51:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.246 rmmod nvme_tcp 00:11:29.246 rmmod nvme_fabrics 00:11:29.246 rmmod nvme_keyring 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 880943 ']' 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 880943 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 880943 ']' 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 880943 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:29.246 
12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 880943 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 880943' 00:11:29.246 killing process with pid 880943 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 880943 00:11:29.246 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 880943 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.508 12:51:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.044 00:11:32.044 real 0m9.284s 00:11:32.044 user 0m5.543s 00:11:32.044 sys 0m4.791s 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:32.044 ************************************ 00:11:32.044 END TEST nvmf_target_discovery 00:11:32.044 ************************************ 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.044 ************************************ 00:11:32.044 START TEST nvmf_referrals 00:11:32.044 ************************************ 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:32.044 * Looking for test storage... 
00:11:32.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:32.044 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:32.045 12:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:32.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.045 
--rc genhtml_branch_coverage=1 00:11:32.045 --rc genhtml_function_coverage=1 00:11:32.045 --rc genhtml_legend=1 00:11:32.045 --rc geninfo_all_blocks=1 00:11:32.045 --rc geninfo_unexecuted_blocks=1 00:11:32.045 00:11:32.045 ' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:32.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.045 --rc genhtml_branch_coverage=1 00:11:32.045 --rc genhtml_function_coverage=1 00:11:32.045 --rc genhtml_legend=1 00:11:32.045 --rc geninfo_all_blocks=1 00:11:32.045 --rc geninfo_unexecuted_blocks=1 00:11:32.045 00:11:32.045 ' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:32.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.045 --rc genhtml_branch_coverage=1 00:11:32.045 --rc genhtml_function_coverage=1 00:11:32.045 --rc genhtml_legend=1 00:11:32.045 --rc geninfo_all_blocks=1 00:11:32.045 --rc geninfo_unexecuted_blocks=1 00:11:32.045 00:11:32.045 ' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:32.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.045 --rc genhtml_branch_coverage=1 00:11:32.045 --rc genhtml_function_coverage=1 00:11:32.045 --rc genhtml_legend=1 00:11:32.045 --rc geninfo_all_blocks=1 00:11:32.045 --rc geninfo_unexecuted_blocks=1 00:11:32.045 00:11:32.045 ' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.045 
12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.045 12:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:32.045 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:32.046 12:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.046 12:51:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:38.618 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:38.618 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:38.618 Found net devices under 0000:af:00.0: cvl_0_0 00:11:38.618 12:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:38.618 Found net devices under 0000:af:00.1: cvl_0_1 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.618 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:11:38.619 00:11:38.619 --- 10.0.0.2 ping statistics --- 00:11:38.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.619 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:38.619 00:11:38.619 --- 10.0.0.1 ping statistics --- 00:11:38.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.619 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=884655 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 884655 00:11:38.619 
12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 884655 ']' 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 [2024-12-15 12:51:45.718758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:38.619 [2024-12-15 12:51:45.718803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.619 [2024-12-15 12:51:45.797001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.619 [2024-12-15 12:51:45.819803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.619 [2024-12-15 12:51:45.819846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:38.619 [2024-12-15 12:51:45.819853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.619 [2024-12-15 12:51:45.819859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.619 [2024-12-15 12:51:45.819865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.619 [2024-12-15 12:51:45.821326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.619 [2024-12-15 12:51:45.821434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.619 [2024-12-15 12:51:45.821543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.619 [2024-12-15 12:51:45.821544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 [2024-12-15 12:51:45.961703] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 [2024-12-15 12:51:45.996020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:38.619 12:51:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:38.619 12:51:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:38.619 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.620 12:51:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.620 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.879 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:39.138 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:39.138 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:39.138 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:39.138 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:39.138 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:39.138 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.138 12:51:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:39.138 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:39.138 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:39.138 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:39.138 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.397 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:39.655 12:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:39.655 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:39.656 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.656 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.914 12:51:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.172 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.172 rmmod nvme_tcp 00:11:40.172 rmmod nvme_fabrics 00:11:40.172 rmmod nvme_keyring 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 884655 ']' 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 884655 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 884655 ']' 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 884655 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884655 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884655' 00:11:40.430 killing process with pid 884655 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 884655 00:11:40.430 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 884655 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.431 12:51:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.967 00:11:42.967 real 0m10.964s 00:11:42.967 user 0m12.433s 00:11:42.967 sys 0m5.213s 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.967 ************************************ 
00:11:42.967 END TEST nvmf_referrals 00:11:42.967 ************************************ 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.967 ************************************ 00:11:42.967 START TEST nvmf_connect_disconnect 00:11:42.967 ************************************ 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:42.967 * Looking for test storage... 
00:11:42.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.967 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.968 --rc genhtml_branch_coverage=1 00:11:42.968 --rc genhtml_function_coverage=1 00:11:42.968 --rc genhtml_legend=1 00:11:42.968 --rc geninfo_all_blocks=1 00:11:42.968 --rc geninfo_unexecuted_blocks=1 00:11:42.968 00:11:42.968 ' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.968 --rc genhtml_branch_coverage=1 00:11:42.968 --rc genhtml_function_coverage=1 00:11:42.968 --rc genhtml_legend=1 00:11:42.968 --rc geninfo_all_blocks=1 00:11:42.968 --rc geninfo_unexecuted_blocks=1 00:11:42.968 00:11:42.968 ' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.968 --rc genhtml_branch_coverage=1 00:11:42.968 --rc genhtml_function_coverage=1 00:11:42.968 --rc genhtml_legend=1 00:11:42.968 --rc geninfo_all_blocks=1 00:11:42.968 --rc geninfo_unexecuted_blocks=1 00:11:42.968 00:11:42.968 ' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.968 --rc genhtml_branch_coverage=1 00:11:42.968 --rc genhtml_function_coverage=1 00:11:42.968 --rc genhtml_legend=1 00:11:42.968 --rc geninfo_all_blocks=1 00:11:42.968 --rc geninfo_unexecuted_blocks=1 00:11:42.968 00:11:42.968 ' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.968 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.540 12:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.540 12:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:49.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:49.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.540 12:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:49.540 Found net devices under 0000:af:00.0: cvl_0_0 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.540 12:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:49.540 Found net devices under 0000:af:00.1: cvl_0_1 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.540 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.541 12:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:11:49.541 00:11:49.541 --- 10.0.0.2 ping statistics --- 00:11:49.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.541 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:11:49.541 00:11:49.541 --- 10.0.0.1 ping statistics --- 00:11:49.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.541 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=888662 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 888662 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 888662 ']' 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.541 12:51:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 [2024-12-15 12:51:56.798712] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:49.541 [2024-12-15 12:51:56.798763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.541 [2024-12-15 12:51:56.876001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.541 [2024-12-15 12:51:56.899453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:49.541 [2024-12-15 12:51:56.899493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.541 [2024-12-15 12:51:56.899499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.541 [2024-12-15 12:51:56.899505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.541 [2024-12-15 12:51:56.899510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.541 [2024-12-15 12:51:56.900934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.541 [2024-12-15 12:51:56.901042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.541 [2024-12-15 12:51:56.901153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.541 [2024-12-15 12:51:56.901154] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:49.541 12:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 [2024-12-15 12:51:57.045584] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.541 12:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:49.541 [2024-12-15 12:51:57.109529] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:49.541 12:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:52.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.905 
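The loop driving the output above comes from target/connect_disconnect.sh with num_iterations=100 and NVME_CONNECT='nvme connect -i 8': each iteration connects to the subsystem over TCP and disconnects again. A minimal, hypothetical Python sketch of that driver loop is below; it only assembles the nvme-cli command lines (it does not execute them, since a live NVMe-oF target is assumed), and the helper name `build_commands` is illustrative, not part of the real bash test.

```python
# Hypothetical sketch of the connect/disconnect loop exercised by
# target/connect_disconnect.sh. It builds the nvme-cli command lines for
# each iteration without running them; a real SPDK target at 10.0.0.2:4420
# would be required to execute these.
NQN = "nqn.2016-06.io.spdk:cnode1"
ADDR, PORT = "10.0.0.2", "4420"
NUM_ITERATIONS = 100  # matches num_iterations=100 in the log above

def build_commands(iterations=NUM_ITERATIONS):
    """Yield one (connect_cmd, disconnect_cmd) pair per iteration."""
    connect = ["nvme", "connect", "-i", "8",   # -i 8: 8 I/O queues, as in NVME_CONNECT
               "-t", "tcp", "-n", NQN, "-a", ADDR, "-s", PORT]
    disconnect = ["nvme", "disconnect", "-n", NQN]
    for _ in range(iterations):
        yield connect, disconnect

pairs = list(build_commands())
```

Each disconnect, when run against a live target, prints the "NQN:... disconnected 1 controller(s)" line seen repeatedly below.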
[~90 further identical 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' messages, one per remaining connect/disconnect iteration, elided]
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.189 rmmod nvme_tcp 00:15:40.189 rmmod nvme_fabrics 00:15:40.189 rmmod nvme_keyring 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
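Each successful iteration above logs exactly one "disconnected 1 controller(s)" message, so a quick sanity check on a captured log is to count those lines and compare against num_iterations. A small sketch (the sample string here is an illustrative excerpt, not the full log):

```python
# Count completed connect/disconnect iterations in a captured log by
# tallying the per-iteration disconnect messages.
sample_log = """\
00:11:52.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:53.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:56.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
"""

def count_iterations(log_text: str, nqn: str = "nqn.2016-06.io.spdk:cnode1") -> int:
    """Return how many disconnect messages for the given NQN appear in the log."""
    marker = f"NQN:{nqn} disconnected"
    return sum(1 for line in log_text.splitlines() if marker in line)

print(count_iterations(sample_log))  # 3 for the sample excerpt above
```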
-- # modprobe -v -r nvme-fabrics 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 888662 ']' 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 888662 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 888662 ']' 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 888662 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 888662 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 888662' 00:15:40.189 killing process with pid 888662 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 888662 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 888662 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:40.189 12:55:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.189 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.095 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:42.095 00:15:42.095 real 3m59.514s 00:15:42.095 user 15m14.532s 00:15:42.095 sys 0m24.547s 00:15:42.095 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.095 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.095 ************************************ 00:15:42.095 END TEST nvmf_connect_disconnect 00:15:42.095 ************************************ 00:15:42.095 12:55:49 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:42.095 12:55:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.095 12:55:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.095 12:55:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.355 ************************************ 00:15:42.355 START TEST nvmf_multitarget 00:15:42.355 ************************************ 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:42.355 * Looking for test storage... 00:15:42.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.355 --rc genhtml_branch_coverage=1 00:15:42.355 --rc genhtml_function_coverage=1 00:15:42.355 --rc genhtml_legend=1 00:15:42.355 --rc geninfo_all_blocks=1 00:15:42.355 --rc 
geninfo_unexecuted_blocks=1 00:15:42.355 00:15:42.355 ' 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.355 --rc genhtml_branch_coverage=1 00:15:42.355 --rc genhtml_function_coverage=1 00:15:42.355 --rc genhtml_legend=1 00:15:42.355 --rc geninfo_all_blocks=1 00:15:42.355 --rc geninfo_unexecuted_blocks=1 00:15:42.355 00:15:42.355 ' 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.355 --rc genhtml_branch_coverage=1 00:15:42.355 --rc genhtml_function_coverage=1 00:15:42.355 --rc genhtml_legend=1 00:15:42.355 --rc geninfo_all_blocks=1 00:15:42.355 --rc geninfo_unexecuted_blocks=1 00:15:42.355 00:15:42.355 ' 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.355 --rc genhtml_branch_coverage=1 00:15:42.355 --rc genhtml_function_coverage=1 00:15:42.355 --rc genhtml_legend=1 00:15:42.355 --rc geninfo_all_blocks=1 00:15:42.355 --rc geninfo_unexecuted_blocks=1 00:15:42.355 00:15:42.355 ' 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.355 12:55:50 
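The xtrace run above is scripts/common.sh probing the installed lcov version: `lt 1.15 2` splits each version string on `.-:` into the `ver1`/`ver2` arrays and compares them field by field as integers. A rough Python analogue of that dotted-version comparison (an approximation for illustration, not the actual shell helper):

```python
# Rough Python analogue of the cmp_versions helper in scripts/common.sh:
# split each version on '.', ':' or '-', keep the numeric fields, pad the
# shorter list with zeros, and compare field by field as integers.
import re

def version_lt(a: str, b: str) -> bool:
    pa = [int(x) for x in re.split(r"[.:-]", a) if x.isdigit()]
    pb = [int(x) for x in re.split(r"[.:-]", b) if x.isdigit()]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    return pa < pb  # list comparison is lexicographic, like the shell loop

print(version_lt("1.15", "2"))  # True: lcov 1.15 sorts before 2
```

The test uses the result to pick lcov option sets that differ between the 1.x and 2.x releases.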
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.355 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.356 12:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:42.356 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:48.927 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:48.927 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:48.927 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:48.928 Found net devices under 0000:af:00.0: cvl_0_0 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:48.928 Found net devices under 0000:af:00.1: cvl_0_1 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:48.928 12:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.928 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:48.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:15:48.928 00:15:48.928 --- 10.0.0.2 ping statistics --- 00:15:48.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.928 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:48.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:15:48.928 00:15:48.928 --- 10.0.0.1 ping statistics --- 00:15:48.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.928 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=931296 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 931296 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 931296 ']' 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:48.928 [2024-12-15 12:55:56.234696] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:48.928 [2024-12-15 12:55:56.234744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.928 [2024-12-15 12:55:56.316958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:48.928 [2024-12-15 12:55:56.340436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.928 [2024-12-15 12:55:56.340472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.928 [2024-12-15 12:55:56.340483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.928 [2024-12-15 12:55:56.340488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.928 [2024-12-15 12:55:56.340493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:48.928 [2024-12-15 12:55:56.341956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.928 [2024-12-15 12:55:56.342066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.928 [2024-12-15 12:55:56.342172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.928 [2024-12-15 12:55:56.342173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:15:48.928 "nvmf_tgt_1" 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:48.928 "nvmf_tgt_2" 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:48.928 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:49.188 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:49.188 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:49.188 true 00:15:49.188 12:55:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:49.188 true 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:49.447 12:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:49.447 rmmod nvme_tcp 00:15:49.447 rmmod nvme_fabrics 00:15:49.447 rmmod nvme_keyring 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 931296 ']' 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 931296 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 931296 ']' 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 931296 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 931296 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 931296' 00:15:49.447 killing process with pid 931296 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 931296 00:15:49.447 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 931296 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.706 12:55:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:52.242 00:15:52.242 
real 0m9.528s 00:15:52.242 user 0m7.009s 00:15:52.242 sys 0m4.919s 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:52.242 ************************************ 00:15:52.242 END TEST nvmf_multitarget 00:15:52.242 ************************************ 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.242 ************************************ 00:15:52.242 START TEST nvmf_rpc 00:15:52.242 ************************************ 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:52.242 * Looking for test storage... 
00:15:52.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:52.242 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.243 12:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.243 --rc genhtml_branch_coverage=1 00:15:52.243 --rc genhtml_function_coverage=1 00:15:52.243 --rc genhtml_legend=1 00:15:52.243 --rc geninfo_all_blocks=1 00:15:52.243 --rc geninfo_unexecuted_blocks=1 
00:15:52.243 00:15:52.243 ' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.243 --rc genhtml_branch_coverage=1 00:15:52.243 --rc genhtml_function_coverage=1 00:15:52.243 --rc genhtml_legend=1 00:15:52.243 --rc geninfo_all_blocks=1 00:15:52.243 --rc geninfo_unexecuted_blocks=1 00:15:52.243 00:15:52.243 ' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.243 --rc genhtml_branch_coverage=1 00:15:52.243 --rc genhtml_function_coverage=1 00:15:52.243 --rc genhtml_legend=1 00:15:52.243 --rc geninfo_all_blocks=1 00:15:52.243 --rc geninfo_unexecuted_blocks=1 00:15:52.243 00:15:52.243 ' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.243 --rc genhtml_branch_coverage=1 00:15:52.243 --rc genhtml_function_coverage=1 00:15:52.243 --rc genhtml_legend=1 00:15:52.243 --rc geninfo_all_blocks=1 00:15:52.243 --rc geninfo_unexecuted_blocks=1 00:15:52.243 00:15:52.243 ' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.243 12:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:52.243 12:55:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:52.243 12:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.518 
12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:15:57.518 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:57.518 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:57.518 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:57.776 Found net devices under 0000:af:00.0: cvl_0_0 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:57.776 Found net devices under 0000:af:00.1: cvl_0_1 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.776 12:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:57.776 
12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:58.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:15:58.036 00:15:58.036 --- 10.0.0.2 ping statistics --- 00:15:58.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.036 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:58.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:15:58.036 00:15:58.036 --- 10.0.0.1 ping statistics --- 00:15:58.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.036 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=935011 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:58.036 
12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 935011 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 935011 ']' 00:15:58.036 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.037 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.037 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.037 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.037 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.037 [2024-12-15 12:56:05.808294] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:58.037 [2024-12-15 12:56:05.808343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.037 [2024-12-15 12:56:05.887044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.037 [2024-12-15 12:56:05.909388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.037 [2024-12-15 12:56:05.909429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.037 [2024-12-15 12:56:05.909437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.037 [2024-12-15 12:56:05.909443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:58.037 [2024-12-15 12:56:05.909448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.037 [2024-12-15 12:56:05.910925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.037 [2024-12-15 12:56:05.911034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.037 [2024-12-15 12:56:05.911138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.037 [2024-12-15 12:56:05.911139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:58.296 "tick_rate": 2100000000, 00:15:58.296 "poll_groups": [ 00:15:58.296 { 00:15:58.296 "name": "nvmf_tgt_poll_group_000", 00:15:58.296 "admin_qpairs": 0, 00:15:58.296 "io_qpairs": 0, 00:15:58.296 
"current_admin_qpairs": 0, 00:15:58.296 "current_io_qpairs": 0, 00:15:58.296 "pending_bdev_io": 0, 00:15:58.296 "completed_nvme_io": 0, 00:15:58.296 "transports": [] 00:15:58.296 }, 00:15:58.296 { 00:15:58.296 "name": "nvmf_tgt_poll_group_001", 00:15:58.296 "admin_qpairs": 0, 00:15:58.296 "io_qpairs": 0, 00:15:58.296 "current_admin_qpairs": 0, 00:15:58.296 "current_io_qpairs": 0, 00:15:58.296 "pending_bdev_io": 0, 00:15:58.296 "completed_nvme_io": 0, 00:15:58.296 "transports": [] 00:15:58.296 }, 00:15:58.296 { 00:15:58.296 "name": "nvmf_tgt_poll_group_002", 00:15:58.296 "admin_qpairs": 0, 00:15:58.296 "io_qpairs": 0, 00:15:58.296 "current_admin_qpairs": 0, 00:15:58.296 "current_io_qpairs": 0, 00:15:58.296 "pending_bdev_io": 0, 00:15:58.296 "completed_nvme_io": 0, 00:15:58.296 "transports": [] 00:15:58.296 }, 00:15:58.296 { 00:15:58.296 "name": "nvmf_tgt_poll_group_003", 00:15:58.296 "admin_qpairs": 0, 00:15:58.296 "io_qpairs": 0, 00:15:58.296 "current_admin_qpairs": 0, 00:15:58.296 "current_io_qpairs": 0, 00:15:58.296 "pending_bdev_io": 0, 00:15:58.296 "completed_nvme_io": 0, 00:15:58.296 "transports": [] 00:15:58.296 } 00:15:58.296 ] 00:15:58.296 }' 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.296 [2024-12-15 12:56:06.159817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:58.296 "tick_rate": 2100000000, 00:15:58.296 "poll_groups": [ 00:15:58.296 { 00:15:58.296 "name": "nvmf_tgt_poll_group_000", 00:15:58.296 "admin_qpairs": 0, 00:15:58.296 "io_qpairs": 0, 00:15:58.296 "current_admin_qpairs": 0, 00:15:58.296 "current_io_qpairs": 0, 00:15:58.296 "pending_bdev_io": 0, 00:15:58.296 "completed_nvme_io": 0, 00:15:58.296 "transports": [ 00:15:58.296 { 00:15:58.296 "trtype": "TCP" 00:15:58.296 } 00:15:58.296 ] 00:15:58.296 }, 00:15:58.296 { 00:15:58.296 "name": "nvmf_tgt_poll_group_001", 00:15:58.296 "admin_qpairs": 0, 00:15:58.296 "io_qpairs": 0, 00:15:58.296 "current_admin_qpairs": 0, 00:15:58.296 "current_io_qpairs": 0, 00:15:58.296 "pending_bdev_io": 0, 00:15:58.296 "completed_nvme_io": 0, 00:15:58.296 "transports": [ 00:15:58.296 { 00:15:58.296 "trtype": "TCP" 00:15:58.296 } 00:15:58.296 ] 00:15:58.296 }, 00:15:58.296 { 00:15:58.296 "name": "nvmf_tgt_poll_group_002", 00:15:58.296 "admin_qpairs": 0, 00:15:58.296 "io_qpairs": 0, 00:15:58.296 
"current_admin_qpairs": 0, 00:15:58.296 "current_io_qpairs": 0, 00:15:58.296 "pending_bdev_io": 0, 00:15:58.296 "completed_nvme_io": 0, 00:15:58.296 "transports": [ 00:15:58.296 { 00:15:58.296 "trtype": "TCP" 00:15:58.296 } 00:15:58.296 ] 00:15:58.296 }, 00:15:58.296 { 00:15:58.296 "name": "nvmf_tgt_poll_group_003", 00:15:58.296 "admin_qpairs": 0, 00:15:58.296 "io_qpairs": 0, 00:15:58.296 "current_admin_qpairs": 0, 00:15:58.296 "current_io_qpairs": 0, 00:15:58.296 "pending_bdev_io": 0, 00:15:58.296 "completed_nvme_io": 0, 00:15:58.296 "transports": [ 00:15:58.296 { 00:15:58.296 "trtype": "TCP" 00:15:58.296 } 00:15:58.296 ] 00:15:58.296 } 00:15:58.296 ] 00:15:58.296 }' 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:58.296 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:58.297 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:58.297 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.556 Malloc1 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.556 [2024-12-15 12:56:06.346561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.556 
12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:58.556 [2024-12-15 12:56:06.375082] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:15:58.556 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:58.556 could not add new controller: failed to write to nvme-fabrics device 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.556 12:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.556 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.933 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.933 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:59.933 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.933 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:59.933 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.838 12:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:01.838 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.098 [2024-12-15 12:56:09.788806] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:02.098 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:02.098 could not add new controller: failed to write to nvme-fabrics device 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:02.098 12:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.098 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:03.476 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:03.476 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:03.476 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.476 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:03.476 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:05.380 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.381 [2024-12-15 12:56:13.236919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.381 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.757 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:06.757 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:06.757 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.757 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:06.757 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:08.665 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.666 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.925 12:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.925 [2024-12-15 12:56:16.582008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.925 12:56:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:09.862 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:09.862 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:09.862 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.862 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:09.862 12:56:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:12.394 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:12.394 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:12.394 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:12.394 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:12.394 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.394 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 [2024-12-15 12:56:19.903786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.395 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:13.332 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:13.332 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:13.332 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:13.332 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:13.332 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.237 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.496 [2024-12-15 12:56:23.168251] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.496 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.432 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:16.432 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:16.432 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.432 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:16.432 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:16:18.433 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:18.433 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:18.433 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.433 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:18.433 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.433 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:18.433 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 [2024-12-15 12:56:26.467888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 12:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.070 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.070 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:20.070 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.070 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:20.070 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 [2024-12-15 12:56:29.780681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.975 [2024-12-15 12:56:29.832809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.975 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:21.976 
12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.976 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.976 [2024-12-15 12:56:29.880942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.235 
12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 [2024-12-15 12:56:29.929112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 [2024-12-15 
12:56:29.981285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 
12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.235 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:22.235 "tick_rate": 2100000000, 00:16:22.235 "poll_groups": [ 00:16:22.235 { 00:16:22.235 "name": "nvmf_tgt_poll_group_000", 00:16:22.235 "admin_qpairs": 2, 00:16:22.235 "io_qpairs": 168, 00:16:22.235 "current_admin_qpairs": 0, 00:16:22.235 "current_io_qpairs": 0, 00:16:22.235 "pending_bdev_io": 0, 00:16:22.235 "completed_nvme_io": 218, 00:16:22.235 "transports": [ 00:16:22.235 { 00:16:22.235 "trtype": "TCP" 00:16:22.235 } 00:16:22.235 ] 00:16:22.235 }, 00:16:22.235 { 00:16:22.235 "name": "nvmf_tgt_poll_group_001", 00:16:22.235 "admin_qpairs": 2, 00:16:22.235 "io_qpairs": 168, 00:16:22.235 "current_admin_qpairs": 0, 00:16:22.235 "current_io_qpairs": 0, 00:16:22.235 "pending_bdev_io": 0, 00:16:22.235 "completed_nvme_io": 268, 00:16:22.236 "transports": [ 00:16:22.236 { 00:16:22.236 "trtype": "TCP" 00:16:22.236 } 00:16:22.236 ] 00:16:22.236 }, 00:16:22.236 { 00:16:22.236 "name": "nvmf_tgt_poll_group_002", 00:16:22.236 "admin_qpairs": 1, 00:16:22.236 "io_qpairs": 168, 00:16:22.236 "current_admin_qpairs": 0, 00:16:22.236 "current_io_qpairs": 0, 00:16:22.236 "pending_bdev_io": 0, 00:16:22.236 "completed_nvme_io": 269, 00:16:22.236 "transports": [ 00:16:22.236 { 00:16:22.236 "trtype": "TCP" 00:16:22.236 } 00:16:22.236 ] 00:16:22.236 }, 00:16:22.236 { 00:16:22.236 "name": "nvmf_tgt_poll_group_003", 00:16:22.236 "admin_qpairs": 2, 00:16:22.236 "io_qpairs": 168, 
00:16:22.236 "current_admin_qpairs": 0, 00:16:22.236 "current_io_qpairs": 0, 00:16:22.236 "pending_bdev_io": 0, 00:16:22.236 "completed_nvme_io": 267, 00:16:22.236 "transports": [ 00:16:22.236 { 00:16:22.236 "trtype": "TCP" 00:16:22.236 } 00:16:22.236 ] 00:16:22.236 } 00:16:22.236 ] 00:16:22.236 }' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.236 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.236 rmmod nvme_tcp 00:16:22.236 rmmod nvme_fabrics 00:16:22.495 rmmod nvme_keyring 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 935011 ']' 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 935011 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 935011 ']' 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 935011 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 935011 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 935011' 00:16:22.495 killing process with pid 935011 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 935011 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 935011 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:22.495 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.754 12:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.660 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:24.660 00:16:24.660 real 0m32.837s 00:16:24.660 user 1m39.326s 00:16:24.660 sys 0m6.367s 00:16:24.660 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.660 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.660 ************************************ 00:16:24.660 END TEST nvmf_rpc 00:16:24.660 
************************************ 00:16:24.660 12:56:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:24.660 12:56:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:24.660 12:56:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.660 12:56:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.660 ************************************ 00:16:24.660 START TEST nvmf_invalid 00:16:24.660 ************************************ 00:16:24.660 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:24.920 * Looking for test storage... 00:16:24.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:24.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.920 --rc genhtml_branch_coverage=1 00:16:24.920 --rc genhtml_function_coverage=1 00:16:24.920 --rc genhtml_legend=1 00:16:24.920 --rc geninfo_all_blocks=1 00:16:24.920 --rc geninfo_unexecuted_blocks=1 00:16:24.920 00:16:24.920 ' 
00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:24.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.920 --rc genhtml_branch_coverage=1 00:16:24.920 --rc genhtml_function_coverage=1 00:16:24.920 --rc genhtml_legend=1 00:16:24.920 --rc geninfo_all_blocks=1 00:16:24.920 --rc geninfo_unexecuted_blocks=1 00:16:24.920 00:16:24.920 ' 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:24.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.920 --rc genhtml_branch_coverage=1 00:16:24.920 --rc genhtml_function_coverage=1 00:16:24.920 --rc genhtml_legend=1 00:16:24.920 --rc geninfo_all_blocks=1 00:16:24.920 --rc geninfo_unexecuted_blocks=1 00:16:24.920 00:16:24.920 ' 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:24.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.920 --rc genhtml_branch_coverage=1 00:16:24.920 --rc genhtml_function_coverage=1 00:16:24.920 --rc genhtml_legend=1 00:16:24.920 --rc geninfo_all_blocks=1 00:16:24.920 --rc geninfo_unexecuted_blocks=1 00:16:24.920 00:16:24.920 ' 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.920 12:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.920 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.921 
12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.921 12:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:24.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:24.921 12:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:24.921 12:56:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:31.494 12:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.494 12:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:31.494 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:31.494 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:31.494 Found net devices under 0000:af:00.0: cvl_0_0 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:31.494 Found net devices under 0000:af:00.1: cvl_0_1 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.494 12:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.494 12:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:31.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:16:31.494 00:16:31.494 --- 10.0.0.2 ping statistics --- 00:16:31.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.494 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:16:31.494 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:16:31.495 00:16:31.495 --- 10.0.0.1 ping statistics --- 00:16:31.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.495 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:31.495 12:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=942684 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 942684 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 942684 ']' 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:31.495 [2024-12-15 12:56:38.792385] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:31.495 [2024-12-15 12:56:38.792428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.495 [2024-12-15 12:56:38.870686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.495 [2024-12-15 12:56:38.893845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.495 [2024-12-15 12:56:38.893883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.495 [2024-12-15 12:56:38.893890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.495 [2024-12-15 12:56:38.893896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.495 [2024-12-15 12:56:38.893901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:31.495 [2024-12-15 12:56:38.895229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.495 [2024-12-15 12:56:38.895340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.495 [2024-12-15 12:56:38.895446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.495 [2024-12-15 12:56:38.895447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:31.495 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:31.495 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.495 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:31.495 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30239 00:16:31.495 [2024-12-15 12:56:39.188726] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:31.495 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:31.495 { 00:16:31.495 "nqn": "nqn.2016-06.io.spdk:cnode30239", 00:16:31.495 "tgt_name": "foobar", 00:16:31.495 "method": "nvmf_create_subsystem", 00:16:31.495 "req_id": 1 00:16:31.495 } 00:16:31.495 Got JSON-RPC error 
response 00:16:31.495 response: 00:16:31.495 { 00:16:31.495 "code": -32603, 00:16:31.495 "message": "Unable to find target foobar" 00:16:31.495 }' 00:16:31.495 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:31.495 { 00:16:31.495 "nqn": "nqn.2016-06.io.spdk:cnode30239", 00:16:31.495 "tgt_name": "foobar", 00:16:31.495 "method": "nvmf_create_subsystem", 00:16:31.495 "req_id": 1 00:16:31.495 } 00:16:31.495 Got JSON-RPC error response 00:16:31.495 response: 00:16:31.495 { 00:16:31.495 "code": -32603, 00:16:31.495 "message": "Unable to find target foobar" 00:16:31.495 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:31.495 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:31.495 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26073 00:16:31.495 [2024-12-15 12:56:39.389384] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26073: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:31.754 { 00:16:31.754 "nqn": "nqn.2016-06.io.spdk:cnode26073", 00:16:31.754 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:31.754 "method": "nvmf_create_subsystem", 00:16:31.754 "req_id": 1 00:16:31.754 } 00:16:31.754 Got JSON-RPC error response 00:16:31.754 response: 00:16:31.754 { 00:16:31.754 "code": -32602, 00:16:31.754 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:31.754 }' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:31.754 { 00:16:31.754 "nqn": "nqn.2016-06.io.spdk:cnode26073", 00:16:31.754 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:31.754 "method": "nvmf_create_subsystem", 
00:16:31.754 "req_id": 1 00:16:31.754 } 00:16:31.754 Got JSON-RPC error response 00:16:31.754 response: 00:16:31.754 { 00:16:31.754 "code": -32602, 00:16:31.754 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:31.754 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15730 00:16:31.754 [2024-12-15 12:56:39.586025] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15730: invalid model number 'SPDK_Controller' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:31.754 { 00:16:31.754 "nqn": "nqn.2016-06.io.spdk:cnode15730", 00:16:31.754 "model_number": "SPDK_Controller\u001f", 00:16:31.754 "method": "nvmf_create_subsystem", 00:16:31.754 "req_id": 1 00:16:31.754 } 00:16:31.754 Got JSON-RPC error response 00:16:31.754 response: 00:16:31.754 { 00:16:31.754 "code": -32602, 00:16:31.754 "message": "Invalid MN SPDK_Controller\u001f" 00:16:31.754 }' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:31.754 { 00:16:31.754 "nqn": "nqn.2016-06.io.spdk:cnode15730", 00:16:31.754 "model_number": "SPDK_Controller\u001f", 00:16:31.754 "method": "nvmf_create_subsystem", 00:16:31.754 "req_id": 1 00:16:31.754 } 00:16:31.754 Got JSON-RPC error response 00:16:31.754 response: 00:16:31.754 { 00:16:31.754 "code": -32602, 00:16:31.754 "message": "Invalid MN SPDK_Controller\u001f" 00:16:31.754 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:31.754 12:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:31.754 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:32.013 12:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:32.013 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:32.014 12:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:32.014 12:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 
00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]] 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '}BgYoCwoh G{ aXX?V6Kp' 00:16:32.014 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '}BgYoCwoh G{ aXX?V6Kp' nqn.2016-06.io.spdk:cnode21463 00:16:32.274 [2024-12-15 12:56:39.951260] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21463: invalid serial number '}BgYoCwoh G{ aXX?V6Kp' 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:32.274 { 00:16:32.274 "nqn": "nqn.2016-06.io.spdk:cnode21463", 00:16:32.274 "serial_number": "}BgYoCwoh G{ aXX?V6Kp", 00:16:32.274 "method": "nvmf_create_subsystem", 00:16:32.274 "req_id": 1 00:16:32.274 } 00:16:32.274 Got JSON-RPC error response 00:16:32.274 response: 00:16:32.274 { 00:16:32.274 "code": -32602, 00:16:32.274 "message": "Invalid SN }BgYoCwoh G{ aXX?V6Kp" 00:16:32.274 }' 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:32.274 { 00:16:32.274 "nqn": "nqn.2016-06.io.spdk:cnode21463", 00:16:32.274 "serial_number": "}BgYoCwoh G{ aXX?V6Kp", 00:16:32.274 "method": "nvmf_create_subsystem", 00:16:32.274 "req_id": 1 00:16:32.274 } 00:16:32.274 Got JSON-RPC error response 00:16:32.274 response: 00:16:32.274 { 00:16:32.274 "code": -32602, 00:16:32.274 "message": "Invalid SN }BgYoCwoh G{ aXX?V6Kp" 00:16:32.274 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:32.274 
12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:32.274 12:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:32.274 12:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:32.274 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:32.275 12:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:32.275 12:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:32.275 12:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.275 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 
00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 
00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'kJ@l/@K0+]LHz(a~ &RXBHl:EL1YhU!+>i&r"wP?Z' 00:16:32.535 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'kJ@l/@K0+]LHz(a~ &RXBHl:EL1YhU!+>i&r"wP?Z' nqn.2016-06.io.spdk:cnode1516 00:16:32.535 [2024-12-15 
12:56:40.416779] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1516: invalid model number 'kJ@l/@K0+]LHz(a~ &RXBHl:EL1YhU!+>i&r"wP?Z' 00:16:32.794 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:32.794 { 00:16:32.794 "nqn": "nqn.2016-06.io.spdk:cnode1516", 00:16:32.794 "model_number": "kJ@l/@K0+]LHz(a~ &RXBHl:EL1YhU!+>i&r\"wP?Z", 00:16:32.794 "method": "nvmf_create_subsystem", 00:16:32.794 "req_id": 1 00:16:32.794 } 00:16:32.794 Got JSON-RPC error response 00:16:32.794 response: 00:16:32.794 { 00:16:32.794 "code": -32602, 00:16:32.794 "message": "Invalid MN kJ@l/@K0+]LHz(a~ &RXBHl:EL1YhU!+>i&r\"wP?Z" 00:16:32.794 }' 00:16:32.794 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:32.794 { 00:16:32.794 "nqn": "nqn.2016-06.io.spdk:cnode1516", 00:16:32.794 "model_number": "kJ@l/@K0+]LHz(a~ &RXBHl:EL1YhU!+>i&r\"wP?Z", 00:16:32.794 "method": "nvmf_create_subsystem", 00:16:32.794 "req_id": 1 00:16:32.794 } 00:16:32.794 Got JSON-RPC error response 00:16:32.794 response: 00:16:32.794 { 00:16:32.794 "code": -32602, 00:16:32.794 "message": "Invalid MN kJ@l/@K0+]LHz(a~ &RXBHl:EL1YhU!+>i&r\"wP?Z" 00:16:32.794 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:32.794 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:32.794 [2024-12-15 12:56:40.609440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.794 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:33.053 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:33.053 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # 
echo '' 00:16:33.053 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:33.053 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:33.053 12:56:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:33.312 [2024-12-15 12:56:41.010748] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:33.312 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:33.312 { 00:16:33.312 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:33.312 "listen_address": { 00:16:33.312 "trtype": "tcp", 00:16:33.312 "traddr": "", 00:16:33.312 "trsvcid": "4421" 00:16:33.312 }, 00:16:33.312 "method": "nvmf_subsystem_remove_listener", 00:16:33.312 "req_id": 1 00:16:33.312 } 00:16:33.312 Got JSON-RPC error response 00:16:33.312 response: 00:16:33.312 { 00:16:33.312 "code": -32602, 00:16:33.312 "message": "Invalid parameters" 00:16:33.312 }' 00:16:33.312 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:33.312 { 00:16:33.312 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:33.312 "listen_address": { 00:16:33.312 "trtype": "tcp", 00:16:33.312 "traddr": "", 00:16:33.312 "trsvcid": "4421" 00:16:33.312 }, 00:16:33.312 "method": "nvmf_subsystem_remove_listener", 00:16:33.312 "req_id": 1 00:16:33.312 } 00:16:33.312 Got JSON-RPC error response 00:16:33.312 response: 00:16:33.312 { 00:16:33.312 "code": -32602, 00:16:33.312 "message": "Invalid parameters" 00:16:33.313 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:33.313 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25038 -i 0 00:16:33.572 [2024-12-15 
12:56:41.223414] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25038: invalid cntlid range [0-65519] 00:16:33.572 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:33.572 { 00:16:33.572 "nqn": "nqn.2016-06.io.spdk:cnode25038", 00:16:33.572 "min_cntlid": 0, 00:16:33.572 "method": "nvmf_create_subsystem", 00:16:33.572 "req_id": 1 00:16:33.572 } 00:16:33.572 Got JSON-RPC error response 00:16:33.572 response: 00:16:33.572 { 00:16:33.572 "code": -32602, 00:16:33.572 "message": "Invalid cntlid range [0-65519]" 00:16:33.572 }' 00:16:33.572 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:33.572 { 00:16:33.572 "nqn": "nqn.2016-06.io.spdk:cnode25038", 00:16:33.572 "min_cntlid": 0, 00:16:33.572 "method": "nvmf_create_subsystem", 00:16:33.572 "req_id": 1 00:16:33.572 } 00:16:33.572 Got JSON-RPC error response 00:16:33.572 response: 00:16:33.572 { 00:16:33.572 "code": -32602, 00:16:33.572 "message": "Invalid cntlid range [0-65519]" 00:16:33.572 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:33.572 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4484 -i 65520 00:16:33.572 [2024-12-15 12:56:41.432103] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4484: invalid cntlid range [65520-65519] 00:16:33.572 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:33.572 { 00:16:33.572 "nqn": "nqn.2016-06.io.spdk:cnode4484", 00:16:33.572 "min_cntlid": 65520, 00:16:33.572 "method": "nvmf_create_subsystem", 00:16:33.572 "req_id": 1 00:16:33.572 } 00:16:33.572 Got JSON-RPC error response 00:16:33.572 response: 00:16:33.572 { 00:16:33.572 "code": -32602, 00:16:33.572 "message": "Invalid cntlid range [65520-65519]" 00:16:33.572 
}' 00:16:33.572 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:33.572 { 00:16:33.572 "nqn": "nqn.2016-06.io.spdk:cnode4484", 00:16:33.572 "min_cntlid": 65520, 00:16:33.572 "method": "nvmf_create_subsystem", 00:16:33.572 "req_id": 1 00:16:33.572 } 00:16:33.572 Got JSON-RPC error response 00:16:33.572 response: 00:16:33.572 { 00:16:33.572 "code": -32602, 00:16:33.572 "message": "Invalid cntlid range [65520-65519]" 00:16:33.572 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:33.572 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13750 -I 0 00:16:33.830 [2024-12-15 12:56:41.632785] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13750: invalid cntlid range [1-0] 00:16:33.830 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:33.830 { 00:16:33.830 "nqn": "nqn.2016-06.io.spdk:cnode13750", 00:16:33.830 "max_cntlid": 0, 00:16:33.830 "method": "nvmf_create_subsystem", 00:16:33.830 "req_id": 1 00:16:33.830 } 00:16:33.830 Got JSON-RPC error response 00:16:33.830 response: 00:16:33.830 { 00:16:33.830 "code": -32602, 00:16:33.830 "message": "Invalid cntlid range [1-0]" 00:16:33.830 }' 00:16:33.830 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:33.830 { 00:16:33.830 "nqn": "nqn.2016-06.io.spdk:cnode13750", 00:16:33.830 "max_cntlid": 0, 00:16:33.830 "method": "nvmf_create_subsystem", 00:16:33.830 "req_id": 1 00:16:33.830 } 00:16:33.830 Got JSON-RPC error response 00:16:33.830 response: 00:16:33.830 { 00:16:33.830 "code": -32602, 00:16:33.830 "message": "Invalid cntlid range [1-0]" 00:16:33.830 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:33.830 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31666 -I 65520 00:16:34.089 [2024-12-15 12:56:41.837483] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31666: invalid cntlid range [1-65520] 00:16:34.089 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:34.089 { 00:16:34.089 "nqn": "nqn.2016-06.io.spdk:cnode31666", 00:16:34.089 "max_cntlid": 65520, 00:16:34.089 "method": "nvmf_create_subsystem", 00:16:34.089 "req_id": 1 00:16:34.089 } 00:16:34.089 Got JSON-RPC error response 00:16:34.089 response: 00:16:34.089 { 00:16:34.089 "code": -32602, 00:16:34.089 "message": "Invalid cntlid range [1-65520]" 00:16:34.089 }' 00:16:34.089 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:34.089 { 00:16:34.089 "nqn": "nqn.2016-06.io.spdk:cnode31666", 00:16:34.089 "max_cntlid": 65520, 00:16:34.089 "method": "nvmf_create_subsystem", 00:16:34.089 "req_id": 1 00:16:34.089 } 00:16:34.089 Got JSON-RPC error response 00:16:34.089 response: 00:16:34.089 { 00:16:34.089 "code": -32602, 00:16:34.089 "message": "Invalid cntlid range [1-65520]" 00:16:34.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:34.089 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18573 -i 6 -I 5 00:16:34.347 [2024-12-15 12:56:42.034186] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18573: invalid cntlid range [6-5] 00:16:34.347 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:34.347 { 00:16:34.347 "nqn": "nqn.2016-06.io.spdk:cnode18573", 00:16:34.347 "min_cntlid": 6, 00:16:34.347 "max_cntlid": 5, 00:16:34.347 "method": "nvmf_create_subsystem", 00:16:34.347 "req_id": 1 00:16:34.347 } 
00:16:34.347 Got JSON-RPC error response 00:16:34.347 response: 00:16:34.347 { 00:16:34.347 "code": -32602, 00:16:34.347 "message": "Invalid cntlid range [6-5]" 00:16:34.347 }' 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:34.348 { 00:16:34.348 "nqn": "nqn.2016-06.io.spdk:cnode18573", 00:16:34.348 "min_cntlid": 6, 00:16:34.348 "max_cntlid": 5, 00:16:34.348 "method": "nvmf_create_subsystem", 00:16:34.348 "req_id": 1 00:16:34.348 } 00:16:34.348 Got JSON-RPC error response 00:16:34.348 response: 00:16:34.348 { 00:16:34.348 "code": -32602, 00:16:34.348 "message": "Invalid cntlid range [6-5]" 00:16:34.348 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:34.348 { 00:16:34.348 "name": "foobar", 00:16:34.348 "method": "nvmf_delete_target", 00:16:34.348 "req_id": 1 00:16:34.348 } 00:16:34.348 Got JSON-RPC error response 00:16:34.348 response: 00:16:34.348 { 00:16:34.348 "code": -32602, 00:16:34.348 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:34.348 }' 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:34.348 { 00:16:34.348 "name": "foobar", 00:16:34.348 "method": "nvmf_delete_target", 00:16:34.348 "req_id": 1 00:16:34.348 } 00:16:34.348 Got JSON-RPC error response 00:16:34.348 response: 00:16:34.348 { 00:16:34.348 "code": -32602, 00:16:34.348 "message": "The specified target doesn't exist, cannot delete it." 
00:16:34.348 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:34.348 rmmod nvme_tcp 00:16:34.348 rmmod nvme_fabrics 00:16:34.348 rmmod nvme_keyring 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 942684 ']' 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 942684 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 942684 ']' 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 942684 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.348 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942684 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942684' 00:16:34.607 killing process with pid 942684 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 942684 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 942684 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.607 12:56:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.607 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:37.144 00:16:37.144 real 0m11.988s 00:16:37.144 user 0m18.443s 00:16:37.144 sys 0m5.345s 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:37.144 ************************************ 00:16:37.144 END TEST nvmf_invalid 00:16:37.144 ************************************ 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:37.144 ************************************ 00:16:37.144 START TEST nvmf_connect_stress 00:16:37.144 ************************************ 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:37.144 * Looking for test storage... 
00:16:37.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:37.144 12:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.144 12:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.144 --rc genhtml_branch_coverage=1 00:16:37.144 --rc genhtml_function_coverage=1 00:16:37.144 --rc genhtml_legend=1 00:16:37.144 --rc geninfo_all_blocks=1 00:16:37.144 --rc geninfo_unexecuted_blocks=1 00:16:37.144 00:16:37.144 ' 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.144 --rc genhtml_branch_coverage=1 00:16:37.144 --rc genhtml_function_coverage=1 00:16:37.144 --rc genhtml_legend=1 00:16:37.144 --rc geninfo_all_blocks=1 00:16:37.144 --rc geninfo_unexecuted_blocks=1 00:16:37.144 00:16:37.144 ' 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.144 --rc genhtml_branch_coverage=1 00:16:37.144 --rc genhtml_function_coverage=1 00:16:37.144 --rc genhtml_legend=1 00:16:37.144 --rc geninfo_all_blocks=1 00:16:37.144 --rc geninfo_unexecuted_blocks=1 00:16:37.144 00:16:37.144 ' 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.144 --rc genhtml_branch_coverage=1 00:16:37.144 --rc genhtml_function_coverage=1 00:16:37.144 --rc genhtml_legend=1 00:16:37.144 --rc geninfo_all_blocks=1 00:16:37.144 --rc geninfo_unexecuted_blocks=1 00:16:37.144 00:16:37.144 ' 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.144 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:37.145 12:56:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.720 12:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:43.720 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.720 12:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:43.720 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.720 12:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.720 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:43.721 Found net devices under 0000:af:00.0: cvl_0_0 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:43.721 Found net devices under 0000:af:00.1: cvl_0_1 
00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:43.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:43.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:16:43.721 00:16:43.721 --- 10.0.0.2 ping statistics --- 00:16:43.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.721 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:16:43.721 00:16:43.721 --- 10.0.0.1 ping statistics --- 00:16:43.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.721 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:43.721 12:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=946777 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 946777 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 946777 ']' 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.721 [2024-12-15 12:56:50.792777] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:16:43.721 [2024-12-15 12:56:50.792851] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.721 [2024-12-15 12:56:50.876410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:43.721 [2024-12-15 12:56:50.898664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.721 [2024-12-15 12:56:50.898700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.721 [2024-12-15 12:56:50.898706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.721 [2024-12-15 12:56:50.898713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.721 [2024-12-15 12:56:50.898719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:43.721 [2024-12-15 12:56:50.900019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.721 [2024-12-15 12:56:50.900127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.721 [2024-12-15 12:56:50.900128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:43.721 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.721 [2024-12-15 12:56:51.039457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.721 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.722 [2024-12-15 12:56:51.059670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.722 NULL1 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=946961 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.722 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.981 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.981 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:43.981 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.981 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.981 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.240 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.240 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:44.240 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.240 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.240 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.808 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.808 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:44.808 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.808 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.808 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.067 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.067 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:45.067 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.067 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.067 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.326 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.326 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:45.326 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.326 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.326 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.585 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.585 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:45.585 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.585 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.585 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.152 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.152 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:46.152 12:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.152 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.152 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.411 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.411 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:46.411 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.411 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.411 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.670 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.670 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:46.670 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.670 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.670 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.929 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.929 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:46.929 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.929 12:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.929 12:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.188 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.188 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:47.188 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.188 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.188 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.754 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.754 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:47.754 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.754 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.754 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.013 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.013 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:48.013 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.013 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.013 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.272 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.272 12:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:48.272 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.272 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.272 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.531 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.531 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:48.531 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.531 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.531 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.789 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.789 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:48.789 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.789 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.789 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.356 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.356 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:49.356 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.356 12:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.356 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.615 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.615 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:49.615 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.615 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.615 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.874 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.874 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:49.874 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.874 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.874 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.133 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.133 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:50.133 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.133 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.133 12:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.701 12:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.701 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:50.701 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.701 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.701 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.960 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.960 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:50.960 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.960 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.960 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.219 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.219 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:51.219 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.219 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.219 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.478 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.478 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:51.478 
12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.478 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.478 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.737 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.737 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:51.737 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.737 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.737 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.304 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.304 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:52.304 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.304 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.304 12:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.563 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.563 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:52.563 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.563 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.563 
12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.821 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.821 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:52.821 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.821 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.821 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.080 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.080 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:53.080 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.080 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.080 12:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.339 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 946961 00:16:53.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (946961) - No such process 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 946961 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:53.598 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:53.598 rmmod nvme_tcp 00:16:53.598 rmmod nvme_fabrics 00:16:53.598 rmmod nvme_keyring 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 946777 ']' 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 946777 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 946777 ']' 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 946777 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946777 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946777' 00:16:53.599 killing process with pid 946777 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 946777 00:16:53.599 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 946777 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.858 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.763 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:55.763 00:16:55.763 real 0m18.992s 00:16:55.763 user 0m39.232s 00:16:55.763 sys 0m8.584s 00:16:55.763 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.763 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.763 ************************************ 00:16:55.763 END TEST nvmf_connect_stress 00:16:55.763 ************************************ 00:16:55.763 12:57:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:55.763 12:57:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.763 12:57:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.763 12:57:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.023 ************************************ 00:16:56.023 START TEST nvmf_fused_ordering 00:16:56.023 ************************************ 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:56.023 * Looking for test storage... 
00:16:56.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:56.023 12:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.023 12:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:56.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.023 --rc genhtml_branch_coverage=1 00:16:56.023 --rc genhtml_function_coverage=1 00:16:56.023 --rc genhtml_legend=1 00:16:56.023 --rc geninfo_all_blocks=1 00:16:56.023 --rc geninfo_unexecuted_blocks=1 00:16:56.023 00:16:56.023 ' 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:56.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.023 --rc genhtml_branch_coverage=1 00:16:56.023 --rc genhtml_function_coverage=1 00:16:56.023 --rc genhtml_legend=1 00:16:56.023 --rc geninfo_all_blocks=1 00:16:56.023 --rc geninfo_unexecuted_blocks=1 00:16:56.023 00:16:56.023 ' 00:16:56.023 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:56.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.023 --rc genhtml_branch_coverage=1 00:16:56.023 --rc genhtml_function_coverage=1 00:16:56.023 --rc genhtml_legend=1 00:16:56.023 --rc geninfo_all_blocks=1 00:16:56.023 --rc geninfo_unexecuted_blocks=1 00:16:56.023 00:16:56.024 ' 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:56.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.024 --rc genhtml_branch_coverage=1 00:16:56.024 --rc genhtml_function_coverage=1 00:16:56.024 --rc genhtml_legend=1 00:16:56.024 --rc geninfo_all_blocks=1 00:16:56.024 --rc geninfo_unexecuted_blocks=1 00:16:56.024 00:16:56.024 ' 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.024 12:57:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:02.597 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.598 12:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:02.598 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.598 12:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:02.598 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.598 12:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:02.598 Found net devices under 0000:af:00.0: cvl_0_0 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:02.598 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:02.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:17:02.598 00:17:02.598 --- 10.0.0.2 ping statistics --- 00:17:02.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.598 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:17:02.598 00:17:02.598 --- 10.0.0.1 ping statistics --- 00:17:02.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.598 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:02.598 12:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.598 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=952572 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 952572 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 952572 ']' 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.599 12:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.599 [2024-12-15 12:57:09.807915] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:02.599 [2024-12-15 12:57:09.807960] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.599 [2024-12-15 12:57:09.883596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.599 [2024-12-15 12:57:09.904394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.599 [2024-12-15 12:57:09.904430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.599 [2024-12-15 12:57:09.904437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.599 [2024-12-15 12:57:09.904444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.599 [2024-12-15 12:57:09.904449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:02.599 [2024-12-15 12:57:09.904945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.599 [2024-12-15 12:57:10.043456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.599 [2024-12-15 12:57:10.063625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.599 NULL1 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.599 12:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:02.599 [2024-12-15 12:57:10.121428] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:02.599 [2024-12-15 12:57:10.121459] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952595 ] 00:17:02.859 Attached to nqn.2016-06.io.spdk:cnode1 00:17:02.859 Namespace ID: 1 size: 1GB 00:17:02.859 fused_ordering(0) 00:17:02.859 fused_ordering(1) 00:17:02.859 fused_ordering(2) 00:17:02.859 fused_ordering(3) 00:17:02.859 fused_ordering(4) 00:17:02.859 fused_ordering(5) 00:17:02.859 fused_ordering(6) 00:17:02.859 fused_ordering(7) 00:17:02.859 fused_ordering(8) 00:17:02.859 fused_ordering(9) 00:17:02.859 fused_ordering(10) 00:17:02.859 fused_ordering(11) 00:17:02.859 fused_ordering(12) 00:17:02.859 fused_ordering(13) 00:17:02.859 fused_ordering(14) 00:17:02.859 fused_ordering(15) 00:17:02.859 fused_ordering(16) 00:17:02.859 fused_ordering(17) 00:17:02.859 fused_ordering(18) 00:17:02.859 fused_ordering(19) 00:17:02.859 fused_ordering(20) 00:17:02.859 fused_ordering(21) 00:17:02.859 fused_ordering(22) 00:17:02.859 fused_ordering(23) 00:17:02.859 fused_ordering(24) 00:17:02.859 fused_ordering(25) 00:17:02.859 fused_ordering(26) 00:17:02.859 fused_ordering(27) 00:17:02.859 
fused_ordering(28) 00:17:02.859 ... fused_ordering(1023) 00:17:04.256 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:04.256 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.257 rmmod nvme_tcp 00:17:04.257 rmmod nvme_fabrics 00:17:04.257 rmmod nvme_keyring 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 952572 ']' 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 952572 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 952572 ']' 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 952572 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.257 12:57:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 952572 00:17:04.257 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:04.257 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:04.257 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 952572' 00:17:04.257 killing process with pid 952572 00:17:04.257 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 952572 00:17:04.257 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 952572 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.516 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.421 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:06.421 00:17:06.421 real 0m10.595s 00:17:06.421 user 0m5.032s 00:17:06.421 sys 0m5.702s 00:17:06.421 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.421 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.421 ************************************ 00:17:06.421 END TEST nvmf_fused_ordering 00:17:06.421 ************************************ 00:17:06.421 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:06.421 12:57:14 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.421 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.421 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.681 ************************************ 00:17:06.681 START TEST nvmf_ns_masking 00:17:06.681 ************************************ 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:06.681 * Looking for test storage... 00:17:06.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.681 12:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:06.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.681 --rc genhtml_branch_coverage=1 00:17:06.681 --rc genhtml_function_coverage=1 00:17:06.681 --rc genhtml_legend=1 00:17:06.681 --rc geninfo_all_blocks=1 00:17:06.681 --rc geninfo_unexecuted_blocks=1 00:17:06.681 00:17:06.681 ' 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:06.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.681 --rc genhtml_branch_coverage=1 00:17:06.681 --rc genhtml_function_coverage=1 00:17:06.681 --rc genhtml_legend=1 00:17:06.681 --rc geninfo_all_blocks=1 00:17:06.681 --rc geninfo_unexecuted_blocks=1 00:17:06.681 00:17:06.681 ' 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:06.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.681 --rc genhtml_branch_coverage=1 00:17:06.681 --rc genhtml_function_coverage=1 00:17:06.681 --rc genhtml_legend=1 00:17:06.681 --rc geninfo_all_blocks=1 00:17:06.681 --rc geninfo_unexecuted_blocks=1 00:17:06.681 00:17:06.681 ' 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:06.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.681 --rc genhtml_branch_coverage=1 00:17:06.681 --rc 
genhtml_function_coverage=1 00:17:06.681 --rc genhtml_legend=1 00:17:06.681 --rc geninfo_all_blocks=1 00:17:06.681 --rc geninfo_unexecuted_blocks=1 00:17:06.681 00:17:06.681 ' 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.681 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:06.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fe5550f6-0b2d-470a-8d14-05f41858cf9a 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4fe6474a-7fc9-4fa1-a822-e5d98206f6ef 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1c45dd97-c494-477e-b9f8-d0d458af8010 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:06.682 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:13.252 12:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.252 12:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:13.252 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:13.252 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:13.252 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:13.253 Found net devices under 0000:af:00.0: cvl_0_0 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:13.253 Found net devices under 0000:af:00.1: cvl_0_1 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:13.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:17:13.253 00:17:13.253 --- 10.0.0.2 ping statistics --- 00:17:13.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.253 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:13.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:17:13.253 00:17:13.253 --- 10.0.0.1 ping statistics --- 00:17:13.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.253 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=956501 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 956501 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 956501 ']' 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:13.253 [2024-12-15 12:57:20.674169] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:13.253 [2024-12-15 12:57:20.674212] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.253 [2024-12-15 12:57:20.753273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.253 [2024-12-15 12:57:20.774400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.253 [2024-12-15 12:57:20.774440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:13.253 [2024-12-15 12:57:20.774447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.253 [2024-12-15 12:57:20.774453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.253 [2024-12-15 12:57:20.774458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.253 [2024-12-15 12:57:20.774960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.253 12:57:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:13.253 [2024-12-15 12:57:21.074117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.253 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:13.253 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:13.253 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:13.516 Malloc1 00:17:13.516 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:13.816 Malloc2 00:17:13.816 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:14.121 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:14.121 12:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.380 [2024-12-15 12:57:22.096575] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.380 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:14.380 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1c45dd97-c494-477e-b9f8-d0d458af8010 -a 10.0.0.2 -s 4420 -i 4 00:17:14.380 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.380 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:14.380 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.380 12:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:14.380 12:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:16.915 [ 0]:0x1 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.915 
12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d1d384580d84a9091e133067bcafa92 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d1d384580d84a9091e133067bcafa92 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:16.915 [ 0]:0x1 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d1d384580d84a9091e133067bcafa92 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d1d384580d84a9091e133067bcafa92 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:16.915 [ 1]:0x2 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
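The `ns_is_visible` helper exercised above greps `nvme list-ns` for the NSID and then checks that `nvme id-ns -o json | jq -r .nguid` returns a non-zero NGUID; a masked namespace reports an all-zero NGUID. A self-contained sketch of the same NGUID test (payloads are hypothetical, modeled on the values in the log):

```python
import json

ZERO_NGUID = "0" * 32

def ns_is_visible(id_ns_json):
    """Mirror of the log's check: pull .nguid out of the
    `nvme id-ns -o json` output and compare it against an
    all-zero NGUID, which is what a masked namespace reports."""
    nguid = json.loads(id_ns_json).get("nguid", ZERO_NGUID)
    return nguid != ZERO_NGUID

# Hypothetical id-ns payloads modeled on the NGUIDs in the log.
visible = json.dumps({"nguid": "2d1d384580d84a9091e133067bcafa92"})
masked = json.dumps({"nguid": ZERO_NGUID})

print(ns_is_visible(visible))  # True
print(ns_is_visible(masked))   # False
```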
00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a06d9ab6159f4b8c83176fcc754bea54 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a06d9ab6159f4b8c83176fcc754bea54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.915 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:17.174 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:17.432 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:17.432 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1c45dd97-c494-477e-b9f8-d0d458af8010 -a 10.0.0.2 -s 4420 -i 4 00:17:17.432 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:17.432 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:17.432 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.432 12:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:17.432 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:17.432 12:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
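`waitforserial`, shown polling above, runs `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the expected number of devices appears, sleeping 2 s between attempts for up to ~15 tries. A rough Python equivalent with the device count injected instead of shelling out (names are illustrative, not the autotest helper):

```python
import time

def wait_for_serial(count_devices, expected=1, retries=15, delay=2):
    """Sketch of waitforserial: poll until the number of block
    devices carrying the test serial matches the expected count.
    count_devices stands in for
    `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME`."""
    for _ in range(retries):
        if count_devices() == expected:
            return True
        time.sleep(delay)
    return False

# Simulate the device appearing on the second poll.
polls = iter([0, 1])
print(wait_for_serial(lambda: next(polls), expected=1, delay=0))  # True
```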
00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:19.965 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:19.966 [ 0]:0x2 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a06d9ab6159f4b8c83176fcc754bea54 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a06d9ab6159f4b8c83176fcc754bea54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:19.966 [ 0]:0x1 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d1d384580d84a9091e133067bcafa92 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d1d384580d84a9091e133067bcafa92 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:19.966 [ 1]:0x2 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a06d9ab6159f4b8c83176fcc754bea54 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a06d9ab6159f4b8c83176fcc754bea54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.966 12:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.225 [ 0]:0x2 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.225 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.484 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a06d9ab6159f4b8c83176fcc754bea54 00:17:20.484 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a06d9ab6159f4b8c83176fcc754bea54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.484 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:20.484 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.484 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:20.742 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:20.742 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1c45dd97-c494-477e-b9f8-d0d458af8010 -a 10.0.0.2 -s 4420 -i 4 00:17:20.742 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:20.742 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:20.742 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.742 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:20.742 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:20.742 12:57:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:23.276 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:23.276 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:23.276 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.276 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:23.276 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.277 [ 0]:0x1 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.277 12:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d1d384580d84a9091e133067bcafa92 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d1d384580d84a9091e133067bcafa92 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:23.277 [ 1]:0x2 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a06d9ab6159f4b8c83176fcc754bea54 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a06d9ab6159f4b8c83176fcc754bea54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.277 12:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:23.277 
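The sequence above exercises the masking rules end to end: `Malloc1` was re-added with `--no-auto-visible`, so NSID 1 only shows up after `nvmf_ns_add_host` and vanishes again after `nvmf_ns_remove_host`, while the auto-visible NSID 2 stays visible throughout. A toy model of those rules (an illustration, not SPDK internals):

```python
class MaskedNamespace:
    """Toy model of the visibility semantics the test exercises:
    a namespace added with --no-auto-visible is hidden from every
    host until nvmf_ns_add_host grants that host NQN access."""

    def __init__(self, nsid, auto_visible=True):
        self.nsid = nsid
        self.auto_visible = auto_visible
        self.allowed_hosts = set()

    def add_host(self, host_nqn):      # nvmf_ns_add_host
        self.allowed_hosts.add(host_nqn)

    def remove_host(self, host_nqn):   # nvmf_ns_remove_host
        self.allowed_hosts.discard(host_nqn)

    def visible_to(self, host_nqn):
        return self.auto_visible or host_nqn in self.allowed_hosts

host = "nqn.2016-06.io.spdk:host1"
ns1 = MaskedNamespace(1, auto_visible=False)  # Malloc1, --no-auto-visible
ns2 = MaskedNamespace(2)                      # Malloc2, auto-visible

print(ns1.visible_to(host))  # False: masked until the host is added
ns1.add_host(host)
print(ns1.visible_to(host))  # True
ns1.remove_host(host)
print(ns1.visible_to(host))  # False again
print(ns2.visible_to(host))  # True throughout
```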
12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:23.277 [ 0]:0x2 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:23.277 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a06d9ab6159f4b8c83176fcc754bea54 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a06d9ab6159f4b8c83176fcc754bea54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.536 12:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:23.536 [2024-12-15 12:57:31.367022] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:23.536 request: 00:17:23.536 { 00:17:23.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.536 "nsid": 2, 00:17:23.536 "host": "nqn.2016-06.io.spdk:host1", 00:17:23.536 "method": "nvmf_ns_remove_host", 00:17:23.536 "req_id": 1 00:17:23.536 } 00:17:23.536 Got JSON-RPC error response 00:17:23.536 response: 00:17:23.536 { 00:17:23.536 "code": -32602, 00:17:23.536 "message": "Invalid parameters" 00:17:23.536 } 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
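The `NOT` wrapper above expects exactly this failure: `nvmf_ns_remove_host` on NSID 2 returns JSON-RPC error -32602 because `Malloc2` was added auto-visible, so there is no per-host visibility list to edit. A small sketch checking the request/response pair the log shows:

```python
# Request and error response reassembled from the log above.
request = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "nsid": 2,
    "host": "nqn.2016-06.io.spdk:host1",
    "method": "nvmf_ns_remove_host",
    "req_id": 1,
}
error = {"code": -32602, "message": "Invalid parameters"}

def is_invalid_params(resp):
    """-32602 is the JSON-RPC 2.0 'Invalid params' error code."""
    return resp.get("code") == -32602

print(is_invalid_params(error))  # True
```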
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.536 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:23.795 12:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:23.795 [ 0]:0x2 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a06d9ab6159f4b8c83176fcc754bea54 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a06d9ab6159f4b8c83176fcc754bea54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.795 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:23.796 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=958442 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 958442 
/var/tmp/host.sock 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 958442 ']' 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:24.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.055 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:24.055 [2024-12-15 12:57:31.788299] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:24.055 [2024-12-15 12:57:31.788347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958442 ] 00:17:24.055 [2024-12-15 12:57:31.863717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.055 [2024-12-15 12:57:31.885344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.314 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.314 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:24.314 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:24.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fe5550f6-0b2d-470a-8d14-05f41858cf9a 00:17:24.572 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:24.831 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FE5550F60B2D470A8D1405F41858CF9A -i 00:17:24.831 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4fe6474a-7fc9-4fa1-a822-e5d98206f6ef 00:17:24.831 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:24.831 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4FE6474A7FC94FA1A822E5D98206F6EF -i 00:17:25.090 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:25.349 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:25.608 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:25.608 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:25.867 nvme0n1 00:17:25.867 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:25.867 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:26.126 nvme1n2 00:17:26.126 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:26.126 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:26.126 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:26.126 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:26.126 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:26.384 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:26.384 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:26.384 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:26.384 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:26.643 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fe5550f6-0b2d-470a-8d14-05f41858cf9a == \f\e\5\5\5\0\f\6\-\0\b\2\d\-\4\7\0\a\-\8\d\1\4\-\0\5\f\4\1\8\5\8\c\f\9\a ]] 00:17:26.643 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:26.643 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:26.643 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:26.902 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4fe6474a-7fc9-4fa1-a822-e5d98206f6ef == \4\f\e\6\4\7\4\a\-\7\f\c\9\-\4\f\a\1\-\a\8\2\2\-\e\5\d\9\8\2\0\6\f\6\e\f ]] 00:17:26.902 12:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.902 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid fe5550f6-0b2d-470a-8d14-05f41858cf9a 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FE5550F60B2D470A8D1405F41858CF9A 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FE5550F60B2D470A8D1405F41858CF9A 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:27.161 12:57:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FE5550F60B2D470A8D1405F41858CF9A 00:17:27.419 [2024-12-15 12:57:35.141357] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:27.419 [2024-12-15 12:57:35.141389] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:27.419 [2024-12-15 12:57:35.141397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.419 request: 00:17:27.419 { 00:17:27.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.419 "namespace": { 00:17:27.419 "bdev_name": "invalid", 00:17:27.419 "nsid": 1, 00:17:27.419 "nguid": "FE5550F60B2D470A8D1405F41858CF9A", 00:17:27.419 "no_auto_visible": false, 00:17:27.419 "hide_metadata": false 00:17:27.419 }, 00:17:27.419 "method": "nvmf_subsystem_add_ns", 00:17:27.419 "req_id": 1 00:17:27.419 } 00:17:27.419 Got JSON-RPC error response 00:17:27.419 response: 00:17:27.419 { 00:17:27.419 "code": -32602, 00:17:27.419 "message": "Invalid parameters" 00:17:27.419 } 00:17:27.419 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:27.419 12:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:27.419 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:27.419 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:27.419 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid fe5550f6-0b2d-470a-8d14-05f41858cf9a 00:17:27.419 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:27.419 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FE5550F60B2D470A8D1405F41858CF9A -i 00:17:27.676 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:29.579 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:29.579 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:29.579 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 958442 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 958442 ']' 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 958442 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:29.839 12:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 958442 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 958442' 00:17:29.839 killing process with pid 958442 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 958442 00:17:29.839 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 958442 00:17:30.098 12:57:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:17:30.357 rmmod nvme_tcp 00:17:30.357 rmmod nvme_fabrics 00:17:30.357 rmmod nvme_keyring 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 956501 ']' 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 956501 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 956501 ']' 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 956501 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956501 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956501' 00:17:30.357 killing process with pid 956501 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 956501 00:17:30.357 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 956501 00:17:30.616 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:17:30.616 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.617 12:57:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:33.154 00:17:33.154 real 0m26.177s 00:17:33.154 user 0m31.164s 00:17:33.154 sys 0m7.123s 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:33.154 ************************************ 00:17:33.154 END TEST nvmf_ns_masking 00:17:33.154 ************************************ 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:33.154 
12:57:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:33.154 ************************************ 00:17:33.154 START TEST nvmf_nvme_cli 00:17:33.154 ************************************ 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:33.154 * Looking for test storage... 00:17:33.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.154 
12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:33.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.154 --rc genhtml_branch_coverage=1 00:17:33.154 --rc genhtml_function_coverage=1 00:17:33.154 --rc genhtml_legend=1 00:17:33.154 --rc geninfo_all_blocks=1 00:17:33.154 --rc geninfo_unexecuted_blocks=1 00:17:33.154 
00:17:33.154 ' 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:33.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.154 --rc genhtml_branch_coverage=1 00:17:33.154 --rc genhtml_function_coverage=1 00:17:33.154 --rc genhtml_legend=1 00:17:33.154 --rc geninfo_all_blocks=1 00:17:33.154 --rc geninfo_unexecuted_blocks=1 00:17:33.154 00:17:33.154 ' 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:33.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.154 --rc genhtml_branch_coverage=1 00:17:33.154 --rc genhtml_function_coverage=1 00:17:33.154 --rc genhtml_legend=1 00:17:33.154 --rc geninfo_all_blocks=1 00:17:33.154 --rc geninfo_unexecuted_blocks=1 00:17:33.154 00:17:33.154 ' 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:33.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.154 --rc genhtml_branch_coverage=1 00:17:33.154 --rc genhtml_function_coverage=1 00:17:33.154 --rc genhtml_legend=1 00:17:33.154 --rc geninfo_all_blocks=1 00:17:33.154 --rc geninfo_unexecuted_blocks=1 00:17:33.154 00:17:33.154 ' 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.154 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.155 12:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.155 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:39.728 12:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:39.728 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:39.728 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.728 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.728 12:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:39.729 Found net devices under 0000:af:00.0: cvl_0_0 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:39.729 Found net devices under 0000:af:00.1: cvl_0_1 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.729 12:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:39.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:17:39.729 00:17:39.729 --- 10.0.0.2 ping statistics --- 00:17:39.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.729 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:17:39.729 00:17:39.729 --- 10.0.0.1 ping statistics --- 00:17:39.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.729 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:39.729 12:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=962878 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 962878 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 962878 ']' 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.729 [2024-12-15 12:57:46.789226] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:39.729 [2024-12-15 12:57:46.789273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.729 [2024-12-15 12:57:46.855745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.729 [2024-12-15 12:57:46.880456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.729 [2024-12-15 12:57:46.880491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.729 [2024-12-15 12:57:46.880501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.729 [2024-12-15 12:57:46.880507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.729 [2024-12-15 12:57:46.880512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.729 [2024-12-15 12:57:46.881865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.729 [2024-12-15 12:57:46.885841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.729 [2024-12-15 12:57:46.885879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.729 [2024-12-15 12:57:46.885880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.729 12:57:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.729 [2024-12-15 12:57:47.034351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.729 Malloc0 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:39.729 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.730 Malloc1 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.730 [2024-12-15 12:57:47.125073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:39.730 00:17:39.730 Discovery Log Number of Records 2, Generation counter 2 00:17:39.730 =====Discovery Log Entry 0====== 00:17:39.730 trtype: tcp 00:17:39.730 adrfam: ipv4 00:17:39.730 subtype: current discovery subsystem 00:17:39.730 treq: not required 00:17:39.730 portid: 0 00:17:39.730 trsvcid: 4420 
00:17:39.730 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:39.730 traddr: 10.0.0.2 00:17:39.730 eflags: explicit discovery connections, duplicate discovery information 00:17:39.730 sectype: none 00:17:39.730 =====Discovery Log Entry 1====== 00:17:39.730 trtype: tcp 00:17:39.730 adrfam: ipv4 00:17:39.730 subtype: nvme subsystem 00:17:39.730 treq: not required 00:17:39.730 portid: 0 00:17:39.730 trsvcid: 4420 00:17:39.730 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:39.730 traddr: 10.0.0.2 00:17:39.730 eflags: none 00:17:39.730 sectype: none 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:39.730 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:40.666 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:40.666 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:40.666 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.666 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:40.667 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:40.667 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:42.571 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:42.571 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:42.571 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:42.830 
12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:42.830 /dev/nvme0n2 ]] 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:42.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.830 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:42.831 rmmod nvme_tcp 00:17:42.831 rmmod nvme_fabrics 00:17:42.831 rmmod nvme_keyring 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 962878 ']' 
00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 962878 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 962878 ']' 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 962878 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:42.831 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 962878 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 962878' 00:17:43.090 killing process with pid 962878 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 962878 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 962878 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.090 12:57:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:45.625 00:17:45.625 real 0m12.463s 00:17:45.625 user 0m18.042s 00:17:45.625 sys 0m5.060s 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:45.625 ************************************ 00:17:45.625 END TEST nvmf_nvme_cli 00:17:45.625 ************************************ 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.625 ************************************ 00:17:45.625 START TEST 
nvmf_vfio_user 00:17:45.625 ************************************ 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:45.625 * Looking for test storage... 00:17:45.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.625 12:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:45.625 12:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.625 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:45.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.625 --rc genhtml_branch_coverage=1 00:17:45.625 --rc genhtml_function_coverage=1 00:17:45.625 --rc genhtml_legend=1 00:17:45.625 --rc geninfo_all_blocks=1 00:17:45.625 --rc geninfo_unexecuted_blocks=1 00:17:45.625 00:17:45.625 ' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:45.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.626 --rc genhtml_branch_coverage=1 00:17:45.626 --rc genhtml_function_coverage=1 00:17:45.626 --rc genhtml_legend=1 00:17:45.626 --rc geninfo_all_blocks=1 00:17:45.626 --rc geninfo_unexecuted_blocks=1 00:17:45.626 00:17:45.626 ' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:45.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.626 --rc genhtml_branch_coverage=1 00:17:45.626 --rc genhtml_function_coverage=1 00:17:45.626 --rc genhtml_legend=1 00:17:45.626 --rc geninfo_all_blocks=1 00:17:45.626 --rc geninfo_unexecuted_blocks=1 00:17:45.626 00:17:45.626 ' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:45.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.626 --rc genhtml_branch_coverage=1 00:17:45.626 --rc genhtml_function_coverage=1 00:17:45.626 --rc genhtml_legend=1 00:17:45.626 --rc geninfo_all_blocks=1 00:17:45.626 --rc geninfo_unexecuted_blocks=1 00:17:45.626 00:17:45.626 ' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.626 
12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:45.626 12:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=964132 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 964132' 00:17:45.626 Process pid: 964132 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 964132 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
964132 ']' 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.626 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:45.626 [2024-12-15 12:57:53.380266] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:45.626 [2024-12-15 12:57:53.380313] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.626 [2024-12-15 12:57:53.453894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:45.626 [2024-12-15 12:57:53.476677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.626 [2024-12-15 12:57:53.476717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.626 [2024-12-15 12:57:53.476724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.626 [2024-12-15 12:57:53.476730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.626 [2024-12-15 12:57:53.476736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:45.626 [2024-12-15 12:57:53.478176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.626 [2024-12-15 12:57:53.478287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.626 [2024-12-15 12:57:53.478396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.626 [2024-12-15 12:57:53.478396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.885 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.885 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:45.885 12:57:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:46.821 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:47.080 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:47.080 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:47.080 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.080 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:47.080 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:47.080 Malloc1 00:17:47.342 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:47.342 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:47.602 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:47.859 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.859 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:47.859 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:48.118 Malloc2 00:17:48.118 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:48.118 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:48.376 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:48.637 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:48.637 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:48.637 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:48.637 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:48.637 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:48.637 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:48.637 [2024-12-15 12:57:56.449579] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:48.637 [2024-12-15 12:57:56.449613] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964606 ] 00:17:48.637 [2024-12-15 12:57:56.490266] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:48.637 [2024-12-15 12:57:56.495652] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:48.637 [2024-12-15 12:57:56.495672] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8637df4000 00:17:48.637 [2024-12-15 12:57:56.496650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.637 [2024-12-15 12:57:56.497648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.637 [2024-12-15 12:57:56.498654] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.637 [2024-12-15 12:57:56.499662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:48.637 [2024-12-15 12:57:56.500661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:48.637 [2024-12-15 12:57:56.501664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.637 [2024-12-15 12:57:56.502672] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:48.637 [2024-12-15 12:57:56.503678] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:48.637 [2024-12-15 12:57:56.504689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:48.637 [2024-12-15 12:57:56.504698] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8636afe000 00:17:48.637 [2024-12-15 12:57:56.505612] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:48.637 [2024-12-15 12:57:56.515058] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:48.637 [2024-12-15 12:57:56.515081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:48.637 [2024-12-15 12:57:56.520806] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:48.637 [2024-12-15 12:57:56.520847] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:48.637 [2024-12-15 12:57:56.520923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:48.637 [2024-12-15 12:57:56.520940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:48.637 [2024-12-15 12:57:56.520948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:48.637 [2024-12-15 12:57:56.521801] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:48.637 [2024-12-15 12:57:56.521809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:48.637 [2024-12-15 12:57:56.521815] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:48.637 [2024-12-15 12:57:56.522807] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:48.637 [2024-12-15 12:57:56.522814] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:48.637 [2024-12-15 12:57:56.522821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:48.637 [2024-12-15 12:57:56.523814] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:48.637 [2024-12-15 12:57:56.523821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:48.637 [2024-12-15 12:57:56.524817] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:48.637 [2024-12-15 12:57:56.524827] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:48.637 [2024-12-15 12:57:56.524832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:48.637 [2024-12-15 12:57:56.524838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:48.637 [2024-12-15 12:57:56.524945] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:48.637 [2024-12-15 12:57:56.524949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:48.637 [2024-12-15 12:57:56.524954] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:48.637 [2024-12-15 12:57:56.525828] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:48.637 [2024-12-15 12:57:56.526837] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:48.637 [2024-12-15 12:57:56.527843] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:48.637 [2024-12-15 12:57:56.528840] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:48.637 [2024-12-15 12:57:56.528906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:48.637 [2024-12-15 12:57:56.529851] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:48.637 [2024-12-15 12:57:56.529858] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:48.637 [2024-12-15 12:57:56.529862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:48.637 [2024-12-15 12:57:56.529881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:48.637 [2024-12-15 12:57:56.529887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:48.637 [2024-12-15 12:57:56.529901] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:48.637 [2024-12-15 12:57:56.529906] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:48.637 [2024-12-15 12:57:56.529909] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.637 [2024-12-15 12:57:56.529922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:48.637 [2024-12-15 12:57:56.529965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:17:48.637 [2024-12-15 12:57:56.529973] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:48.637 [2024-12-15 12:57:56.529977] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:48.637 [2024-12-15 12:57:56.529981] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:48.637 [2024-12-15 12:57:56.529985] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:48.637 [2024-12-15 12:57:56.529990] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:48.637 [2024-12-15 12:57:56.529994] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:48.638 [2024-12-15 12:57:56.529998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.638 [2024-12-15 12:57:56.530050] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.638 [2024-12-15 12:57:56.530057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.638 [2024-12-15 12:57:56.530065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.638 [2024-12-15 12:57:56.530069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530101] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:48.638 [2024-12-15 12:57:56.530107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530200] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:48.638 [2024-12-15 12:57:56.530204] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:48.638 [2024-12-15 12:57:56.530207] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.638 [2024-12-15 12:57:56.530212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530230] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:48.638 [2024-12-15 12:57:56.530241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:48.638 [2024-12-15 12:57:56.530257] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:48.638 [2024-12-15 12:57:56.530260] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.638 [2024-12-15 12:57:56.530265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530309] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:48.638 [2024-12-15 12:57:56.530313] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:48.638 [2024-12-15 12:57:56.530316] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.638 [2024-12-15 12:57:56.530321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:17:48.638 [2024-12-15 12:57:56.530338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530369] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:48.638 [2024-12-15 12:57:56.530373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:48.638 [2024-12-15 12:57:56.530378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:48.638 [2024-12-15 12:57:56.530394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530471] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:48.638 [2024-12-15 12:57:56.530475] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:48.638 [2024-12-15 12:57:56.530478] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:48.638 [2024-12-15 12:57:56.530482] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:48.638 [2024-12-15 12:57:56.530485] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:48.638 [2024-12-15 12:57:56.530490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:48.638 [2024-12-15 12:57:56.530496] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:48.638 [2024-12-15 12:57:56.530500] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:48.638 [2024-12-15 12:57:56.530504] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.638 [2024-12-15 12:57:56.530509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530516] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:48.638 [2024-12-15 12:57:56.530519] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:48.638 [2024-12-15 12:57:56.530522] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.638 [2024-12-15 12:57:56.530528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530534] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:48.638 [2024-12-15 12:57:56.530538] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:48.638 [2024-12-15 12:57:56.530541] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:48.638 [2024-12-15 12:57:56.530546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:48.638 [2024-12-15 12:57:56.530552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 
12:57:56.530561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:48.638 [2024-12-15 12:57:56.530578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:48.638 ===================================================== 00:17:48.638 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:48.638 ===================================================== 00:17:48.638 Controller Capabilities/Features 00:17:48.638 ================================ 00:17:48.638 Vendor ID: 4e58 00:17:48.638 Subsystem Vendor ID: 4e58 00:17:48.638 Serial Number: SPDK1 00:17:48.638 Model Number: SPDK bdev Controller 00:17:48.639 Firmware Version: 25.01 00:17:48.639 Recommended Arb Burst: 6 00:17:48.639 IEEE OUI Identifier: 8d 6b 50 00:17:48.639 Multi-path I/O 00:17:48.639 May have multiple subsystem ports: Yes 00:17:48.639 May have multiple controllers: Yes 00:17:48.639 Associated with SR-IOV VF: No 00:17:48.639 Max Data Transfer Size: 131072 00:17:48.639 Max Number of Namespaces: 32 00:17:48.639 Max Number of I/O Queues: 127 00:17:48.639 NVMe Specification Version (VS): 1.3 00:17:48.639 NVMe Specification Version (Identify): 1.3 00:17:48.639 Maximum Queue Entries: 256 00:17:48.639 Contiguous Queues Required: Yes 00:17:48.639 Arbitration Mechanisms Supported 00:17:48.639 Weighted Round Robin: Not Supported 00:17:48.639 Vendor Specific: Not Supported 00:17:48.639 Reset Timeout: 15000 ms 00:17:48.639 Doorbell Stride: 4 bytes 00:17:48.639 NVM Subsystem Reset: Not Supported 00:17:48.639 Command Sets Supported 00:17:48.639 NVM Command Set: Supported 00:17:48.639 Boot Partition: Not Supported 00:17:48.639 Memory Page Size Minimum: 4096 bytes 00:17:48.639 
Memory Page Size Maximum: 4096 bytes 00:17:48.639 Persistent Memory Region: Not Supported 00:17:48.639 Optional Asynchronous Events Supported 00:17:48.639 Namespace Attribute Notices: Supported 00:17:48.639 Firmware Activation Notices: Not Supported 00:17:48.639 ANA Change Notices: Not Supported 00:17:48.639 PLE Aggregate Log Change Notices: Not Supported 00:17:48.639 LBA Status Info Alert Notices: Not Supported 00:17:48.639 EGE Aggregate Log Change Notices: Not Supported 00:17:48.639 Normal NVM Subsystem Shutdown event: Not Supported 00:17:48.639 Zone Descriptor Change Notices: Not Supported 00:17:48.639 Discovery Log Change Notices: Not Supported 00:17:48.639 Controller Attributes 00:17:48.639 128-bit Host Identifier: Supported 00:17:48.639 Non-Operational Permissive Mode: Not Supported 00:17:48.639 NVM Sets: Not Supported 00:17:48.639 Read Recovery Levels: Not Supported 00:17:48.639 Endurance Groups: Not Supported 00:17:48.639 Predictable Latency Mode: Not Supported 00:17:48.639 Traffic Based Keep ALive: Not Supported 00:17:48.639 Namespace Granularity: Not Supported 00:17:48.639 SQ Associations: Not Supported 00:17:48.639 UUID List: Not Supported 00:17:48.639 Multi-Domain Subsystem: Not Supported 00:17:48.639 Fixed Capacity Management: Not Supported 00:17:48.639 Variable Capacity Management: Not Supported 00:17:48.639 Delete Endurance Group: Not Supported 00:17:48.639 Delete NVM Set: Not Supported 00:17:48.639 Extended LBA Formats Supported: Not Supported 00:17:48.639 Flexible Data Placement Supported: Not Supported 00:17:48.639 00:17:48.639 Controller Memory Buffer Support 00:17:48.639 ================================ 00:17:48.639 Supported: No 00:17:48.639 00:17:48.639 Persistent Memory Region Support 00:17:48.639 ================================ 00:17:48.639 Supported: No 00:17:48.639 00:17:48.639 Admin Command Set Attributes 00:17:48.639 ============================ 00:17:48.639 Security Send/Receive: Not Supported 00:17:48.639 Format NVM: Not Supported 
00:17:48.639 Firmware Activate/Download: Not Supported 00:17:48.639 Namespace Management: Not Supported 00:17:48.639 Device Self-Test: Not Supported 00:17:48.639 Directives: Not Supported 00:17:48.639 NVMe-MI: Not Supported 00:17:48.639 Virtualization Management: Not Supported 00:17:48.639 Doorbell Buffer Config: Not Supported 00:17:48.639 Get LBA Status Capability: Not Supported 00:17:48.639 Command & Feature Lockdown Capability: Not Supported 00:17:48.639 Abort Command Limit: 4 00:17:48.639 Async Event Request Limit: 4 00:17:48.639 Number of Firmware Slots: N/A 00:17:48.639 Firmware Slot 1 Read-Only: N/A 00:17:48.639 Firmware Activation Without Reset: N/A 00:17:48.639 Multiple Update Detection Support: N/A 00:17:48.639 Firmware Update Granularity: No Information Provided 00:17:48.639 Per-Namespace SMART Log: No 00:17:48.639 Asymmetric Namespace Access Log Page: Not Supported 00:17:48.639 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:48.639 Command Effects Log Page: Supported 00:17:48.639 Get Log Page Extended Data: Supported 00:17:48.639 Telemetry Log Pages: Not Supported 00:17:48.639 Persistent Event Log Pages: Not Supported 00:17:48.639 Supported Log Pages Log Page: May Support 00:17:48.639 Commands Supported & Effects Log Page: Not Supported 00:17:48.639 Feature Identifiers & Effects Log Page:May Support 00:17:48.639 NVMe-MI Commands & Effects Log Page: May Support 00:17:48.639 Data Area 4 for Telemetry Log: Not Supported 00:17:48.639 Error Log Page Entries Supported: 128 00:17:48.639 Keep Alive: Supported 00:17:48.639 Keep Alive Granularity: 10000 ms 00:17:48.639 00:17:48.639 NVM Command Set Attributes 00:17:48.639 ========================== 00:17:48.639 Submission Queue Entry Size 00:17:48.639 Max: 64 00:17:48.639 Min: 64 00:17:48.639 Completion Queue Entry Size 00:17:48.639 Max: 16 00:17:48.639 Min: 16 00:17:48.639 Number of Namespaces: 32 00:17:48.639 Compare Command: Supported 00:17:48.639 Write Uncorrectable Command: Not Supported 00:17:48.639 Dataset 
Management Command: Supported 00:17:48.639 Write Zeroes Command: Supported 00:17:48.639 Set Features Save Field: Not Supported 00:17:48.639 Reservations: Not Supported 00:17:48.639 Timestamp: Not Supported 00:17:48.639 Copy: Supported 00:17:48.639 Volatile Write Cache: Present 00:17:48.639 Atomic Write Unit (Normal): 1 00:17:48.639 Atomic Write Unit (PFail): 1 00:17:48.639 Atomic Compare & Write Unit: 1 00:17:48.639 Fused Compare & Write: Supported 00:17:48.639 Scatter-Gather List 00:17:48.639 SGL Command Set: Supported (Dword aligned) 00:17:48.639 SGL Keyed: Not Supported 00:17:48.639 SGL Bit Bucket Descriptor: Not Supported 00:17:48.639 SGL Metadata Pointer: Not Supported 00:17:48.639 Oversized SGL: Not Supported 00:17:48.639 SGL Metadata Address: Not Supported 00:17:48.639 SGL Offset: Not Supported 00:17:48.639 Transport SGL Data Block: Not Supported 00:17:48.639 Replay Protected Memory Block: Not Supported 00:17:48.639 00:17:48.639 Firmware Slot Information 00:17:48.639 ========================= 00:17:48.639 Active slot: 1 00:17:48.639 Slot 1 Firmware Revision: 25.01 00:17:48.639 00:17:48.639 00:17:48.639 Commands Supported and Effects 00:17:48.639 ============================== 00:17:48.639 Admin Commands 00:17:48.639 -------------- 00:17:48.639 Get Log Page (02h): Supported 00:17:48.639 Identify (06h): Supported 00:17:48.639 Abort (08h): Supported 00:17:48.639 Set Features (09h): Supported 00:17:48.639 Get Features (0Ah): Supported 00:17:48.639 Asynchronous Event Request (0Ch): Supported 00:17:48.639 Keep Alive (18h): Supported 00:17:48.639 I/O Commands 00:17:48.639 ------------ 00:17:48.639 Flush (00h): Supported LBA-Change 00:17:48.639 Write (01h): Supported LBA-Change 00:17:48.639 Read (02h): Supported 00:17:48.639 Compare (05h): Supported 00:17:48.639 Write Zeroes (08h): Supported LBA-Change 00:17:48.639 Dataset Management (09h): Supported LBA-Change 00:17:48.639 Copy (19h): Supported LBA-Change 00:17:48.639 00:17:48.639 Error Log 00:17:48.639 ========= 
00:17:48.639 00:17:48.639 Arbitration 00:17:48.639 =========== 00:17:48.639 Arbitration Burst: 1 00:17:48.639 00:17:48.639 Power Management 00:17:48.639 ================ 00:17:48.639 Number of Power States: 1 00:17:48.639 Current Power State: Power State #0 00:17:48.639 Power State #0: 00:17:48.639 Max Power: 0.00 W 00:17:48.639 Non-Operational State: Operational 00:17:48.639 Entry Latency: Not Reported 00:17:48.639 Exit Latency: Not Reported 00:17:48.639 Relative Read Throughput: 0 00:17:48.639 Relative Read Latency: 0 00:17:48.639 Relative Write Throughput: 0 00:17:48.639 Relative Write Latency: 0 00:17:48.639 Idle Power: Not Reported 00:17:48.639 Active Power: Not Reported 00:17:48.639 Non-Operational Permissive Mode: Not Supported 00:17:48.639 00:17:48.639 Health Information 00:17:48.639 ================== 00:17:48.639 Critical Warnings: 00:17:48.639 Available Spare Space: OK 00:17:48.639 Temperature: OK 00:17:48.639 Device Reliability: OK 00:17:48.639 Read Only: No 00:17:48.639 Volatile Memory Backup: OK 00:17:48.639 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:48.639 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:48.639 Available Spare: 0% 00:17:48.639 Available Sp[2024-12-15 12:57:56.530660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:48.639 [2024-12-15 12:57:56.530667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:48.639 [2024-12-15 12:57:56.530691] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:48.639 [2024-12-15 12:57:56.530700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.639 [2024-12-15 12:57:56.530705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.639 [2024-12-15 12:57:56.530711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.639 [2024-12-15 12:57:56.530716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.640 [2024-12-15 12:57:56.534835] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:48.640 [2024-12-15 12:57:56.534847] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:48.640 [2024-12-15 12:57:56.534881] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:48.640 [2024-12-15 12:57:56.534928] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:48.640 [2024-12-15 12:57:56.534934] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:48.640 [2024-12-15 12:57:56.535884] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:48.640 [2024-12-15 12:57:56.535896] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:48.640 [2024-12-15 12:57:56.535946] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:48.640 [2024-12-15 12:57:56.536910] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:48.899 are Threshold: 0% 00:17:48.899 Life Percentage Used: 0% 00:17:48.899 Data Units Read: 0 00:17:48.899 Data 
Units Written: 0 00:17:48.899 Host Read Commands: 0 00:17:48.899 Host Write Commands: 0 00:17:48.899 Controller Busy Time: 0 minutes 00:17:48.899 Power Cycles: 0 00:17:48.899 Power On Hours: 0 hours 00:17:48.899 Unsafe Shutdowns: 0 00:17:48.899 Unrecoverable Media Errors: 0 00:17:48.899 Lifetime Error Log Entries: 0 00:17:48.899 Warning Temperature Time: 0 minutes 00:17:48.899 Critical Temperature Time: 0 minutes 00:17:48.899 00:17:48.899 Number of Queues 00:17:48.899 ================ 00:17:48.899 Number of I/O Submission Queues: 127 00:17:48.899 Number of I/O Completion Queues: 127 00:17:48.899 00:17:48.899 Active Namespaces 00:17:48.899 ================= 00:17:48.899 Namespace ID:1 00:17:48.899 Error Recovery Timeout: Unlimited 00:17:48.899 Command Set Identifier: NVM (00h) 00:17:48.899 Deallocate: Supported 00:17:48.899 Deallocated/Unwritten Error: Not Supported 00:17:48.899 Deallocated Read Value: Unknown 00:17:48.899 Deallocate in Write Zeroes: Not Supported 00:17:48.899 Deallocated Guard Field: 0xFFFF 00:17:48.899 Flush: Supported 00:17:48.899 Reservation: Supported 00:17:48.899 Namespace Sharing Capabilities: Multiple Controllers 00:17:48.899 Size (in LBAs): 131072 (0GiB) 00:17:48.899 Capacity (in LBAs): 131072 (0GiB) 00:17:48.899 Utilization (in LBAs): 131072 (0GiB) 00:17:48.899 NGUID: 2945159E859F4917B360E3C1B9BE26C9 00:17:48.899 UUID: 2945159e-859f-4917-b360-e3c1b9be26c9 00:17:48.899 Thin Provisioning: Not Supported 00:17:48.899 Per-NS Atomic Units: Yes 00:17:48.899 Atomic Boundary Size (Normal): 0 00:17:48.899 Atomic Boundary Size (PFail): 0 00:17:48.899 Atomic Boundary Offset: 0 00:17:48.899 Maximum Single Source Range Length: 65535 00:17:48.899 Maximum Copy Length: 65535 00:17:48.899 Maximum Source Range Count: 1 00:17:48.899 NGUID/EUI64 Never Reused: No 00:17:48.899 Namespace Write Protected: No 00:17:48.899 Number of LBA Formats: 1 00:17:48.899 Current LBA Format: LBA Format #00 00:17:48.899 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:17:48.899 00:17:48.899 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:48.899 [2024-12-15 12:57:56.763869] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:54.168 Initializing NVMe Controllers 00:17:54.168 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:54.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:54.168 Initialization complete. Launching workers. 00:17:54.168 ======================================================== 00:17:54.168 Latency(us) 00:17:54.168 Device Information : IOPS MiB/s Average min max 00:17:54.168 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39938.03 156.01 3204.82 964.07 7463.43 00:17:54.168 ======================================================== 00:17:54.168 Total : 39938.03 156.01 3204.82 964.07 7463.43 00:17:54.168 00:17:54.168 [2024-12-15 12:58:01.788721] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:54.168 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:54.168 [2024-12-15 12:58:02.022791] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:59.441 Initializing NVMe Controllers 00:17:59.441 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:17:59.441 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:59.441 Initialization complete. Launching workers. 00:17:59.441 ======================================================== 00:17:59.441 Latency(us) 00:17:59.441 Device Information : IOPS MiB/s Average min max 00:17:59.441 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7981.47 6981.37 10975.88 00:17:59.441 ======================================================== 00:17:59.441 Total : 16051.20 62.70 7981.47 6981.37 10975.88 00:17:59.441 00:17:59.441 [2024-12-15 12:58:07.058479] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:59.441 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:59.441 [2024-12-15 12:58:07.260497] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:04.715 [2024-12-15 12:58:12.350219] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:04.715 Initializing NVMe Controllers 00:18:04.715 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:04.715 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:04.715 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:04.715 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:04.715 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:04.715 Initialization complete. Launching workers. 
00:18:04.715 Starting thread on core 2 00:18:04.715 Starting thread on core 3 00:18:04.715 Starting thread on core 1 00:18:04.715 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:04.975 [2024-12-15 12:58:12.643226] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.264 [2024-12-15 12:58:15.719327] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.264 Initializing NVMe Controllers 00:18:08.264 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.264 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.264 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:08.264 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:08.264 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:08.264 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:08.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:08.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:08.264 Initialization complete. Launching workers. 
00:18:08.264 Starting thread on core 1 with urgent priority queue 00:18:08.264 Starting thread on core 2 with urgent priority queue 00:18:08.264 Starting thread on core 3 with urgent priority queue 00:18:08.264 Starting thread on core 0 with urgent priority queue 00:18:08.264 SPDK bdev Controller (SPDK1 ) core 0: 7981.33 IO/s 12.53 secs/100000 ios 00:18:08.264 SPDK bdev Controller (SPDK1 ) core 1: 6925.33 IO/s 14.44 secs/100000 ios 00:18:08.264 SPDK bdev Controller (SPDK1 ) core 2: 8481.00 IO/s 11.79 secs/100000 ios 00:18:08.264 SPDK bdev Controller (SPDK1 ) core 3: 8285.33 IO/s 12.07 secs/100000 ios 00:18:08.264 ======================================================== 00:18:08.264 00:18:08.264 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:08.264 [2024-12-15 12:58:16.010236] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.264 Initializing NVMe Controllers 00:18:08.264 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.264 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.264 Namespace ID: 1 size: 0GB 00:18:08.264 Initialization complete. 00:18:08.264 INFO: using host memory buffer for IO 00:18:08.264 Hello world! 
00:18:08.264 [2024-12-15 12:58:16.044464] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.264 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:08.523 [2024-12-15 12:58:16.329305] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.460 Initializing NVMe Controllers 00:18:09.460 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.460 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.460 Initialization complete. Launching workers. 00:18:09.460 submit (in ns) avg, min, max = 6776.2, 3124.8, 4002616.2 00:18:09.460 complete (in ns) avg, min, max = 20847.3, 1718.1, 4004896.2 00:18:09.460 00:18:09.460 Submit histogram 00:18:09.460 ================ 00:18:09.460 Range in us Cumulative Count 00:18:09.460 3.124 - 3.139: 0.0061% ( 1) 00:18:09.460 3.139 - 3.154: 0.0305% ( 4) 00:18:09.460 3.154 - 3.170: 0.0671% ( 6) 00:18:09.460 3.170 - 3.185: 0.1953% ( 21) 00:18:09.460 3.185 - 3.200: 0.9764% ( 128) 00:18:09.460 3.200 - 3.215: 3.5152% ( 416) 00:18:09.460 3.215 - 3.230: 8.8673% ( 877) 00:18:09.460 3.230 - 3.246: 14.8602% ( 982) 00:18:09.460 3.246 - 3.261: 21.1705% ( 1034) 00:18:09.460 3.261 - 3.276: 28.8966% ( 1266) 00:18:09.460 3.276 - 3.291: 34.9811% ( 997) 00:18:09.460 3.291 - 3.307: 40.2661% ( 866) 00:18:09.460 3.307 - 3.322: 45.0079% ( 777) 00:18:09.460 3.322 - 3.337: 49.5789% ( 749) 00:18:09.460 3.337 - 3.352: 53.3931% ( 625) 00:18:09.460 3.352 - 3.368: 59.1969% ( 951) 00:18:09.460 3.368 - 3.383: 66.6178% ( 1216) 00:18:09.460 3.383 - 3.398: 71.6770% ( 829) 00:18:09.460 3.398 - 3.413: 77.4991% ( 954) 00:18:09.460 3.413 - 3.429: 81.6124% ( 674) 00:18:09.460 3.429 - 3.444: 84.4929% ( 472) 
00:18:09.460 3.444 - 3.459: 86.0552% ( 256) 00:18:09.460 3.459 - 3.474: 87.1476% ( 179) 00:18:09.460 3.474 - 3.490: 87.7395% ( 97) 00:18:09.460 3.490 - 3.505: 88.3132% ( 94) 00:18:09.460 3.505 - 3.520: 88.9784% ( 109) 00:18:09.460 3.520 - 3.535: 89.7840% ( 132) 00:18:09.460 3.535 - 3.550: 90.7482% ( 158) 00:18:09.460 3.550 - 3.566: 91.5843% ( 137) 00:18:09.460 3.566 - 3.581: 92.3837% ( 131) 00:18:09.460 3.581 - 3.596: 93.2564% ( 143) 00:18:09.460 3.596 - 3.611: 94.1719% ( 150) 00:18:09.460 3.611 - 3.627: 95.1300% ( 157) 00:18:09.460 3.627 - 3.642: 96.0759% ( 155) 00:18:09.460 3.642 - 3.657: 96.8937% ( 134) 00:18:09.460 3.657 - 3.672: 97.5955% ( 115) 00:18:09.460 3.672 - 3.688: 98.0776% ( 79) 00:18:09.460 3.688 - 3.703: 98.5658% ( 80) 00:18:09.460 3.703 - 3.718: 98.8710% ( 50) 00:18:09.460 3.718 - 3.733: 99.1151% ( 40) 00:18:09.460 3.733 - 3.749: 99.2799% ( 27) 00:18:09.460 3.749 - 3.764: 99.4141% ( 22) 00:18:09.460 3.764 - 3.779: 99.5057% ( 15) 00:18:09.460 3.779 - 3.794: 99.5545% ( 8) 00:18:09.460 3.794 - 3.810: 99.5911% ( 6) 00:18:09.460 3.810 - 3.825: 99.6216% ( 5) 00:18:09.460 3.825 - 3.840: 99.6399% ( 3) 00:18:09.460 3.840 - 3.855: 99.6521% ( 2) 00:18:09.460 3.870 - 3.886: 99.6705% ( 3) 00:18:09.460 3.886 - 3.901: 99.6766% ( 1) 00:18:09.460 5.090 - 5.120: 99.6827% ( 1) 00:18:09.460 5.150 - 5.181: 99.6888% ( 1) 00:18:09.460 5.242 - 5.272: 99.6949% ( 1) 00:18:09.460 5.303 - 5.333: 99.7010% ( 1) 00:18:09.460 5.364 - 5.394: 99.7071% ( 1) 00:18:09.460 5.394 - 5.425: 99.7132% ( 1) 00:18:09.460 5.425 - 5.455: 99.7254% ( 2) 00:18:09.460 5.486 - 5.516: 99.7376% ( 2) 00:18:09.460 5.516 - 5.547: 99.7437% ( 1) 00:18:09.460 5.547 - 5.577: 99.7559% ( 2) 00:18:09.460 5.577 - 5.608: 99.7681% ( 2) 00:18:09.460 5.638 - 5.669: 99.7742% ( 1) 00:18:09.460 5.790 - 5.821: 99.7803% ( 1) 00:18:09.460 5.821 - 5.851: 99.7864% ( 1) 00:18:09.460 5.973 - 6.004: 99.7925% ( 1) 00:18:09.460 6.065 - 6.095: 99.7986% ( 1) 00:18:09.461 6.095 - 6.126: 99.8047% ( 1) 00:18:09.461 6.126 - 6.156: 
99.8169% ( 2) 00:18:09.461 6.187 - 6.217: 99.8230% ( 1) 00:18:09.461 6.309 - 6.339: 99.8291% ( 1) 00:18:09.461 6.491 - 6.522: 99.8352% ( 1) 00:18:09.461 6.522 - 6.552: 99.8474% ( 2) 00:18:09.461 6.552 - 6.583: 99.8596% ( 2) 00:18:09.461 6.735 - 6.766: 99.8657% ( 1) 00:18:09.461 6.766 - 6.796: 99.8718% ( 1) 00:18:09.461 7.070 - 7.101: 99.8779% ( 1) 00:18:09.461 7.741 - 7.771: 99.8840% ( 1) 00:18:09.461 7.771 - 7.802: 99.8963% ( 2) 00:18:09.461 7.924 - 7.985: 99.9024% ( 1) 00:18:09.461 [2024-12-15 12:58:17.350822] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.720 8.168 - 8.229: 99.9085% ( 1) 00:18:09.720 8.655 - 8.716: 99.9146% ( 1) 00:18:09.720 3994.575 - 4025.783: 100.0000% ( 14) 00:18:09.720 00:18:09.720 Complete histogram 00:18:09.720 ================== 00:18:09.720 Range in us Cumulative Count 00:18:09.720 1.714 - 1.722: 0.0122% ( 2) 00:18:09.720 1.722 - 1.730: 0.0366% ( 4) 00:18:09.720 1.730 - 1.737: 0.0976% ( 10) 00:18:09.720 1.737 - 1.745: 0.1343% ( 6) 00:18:09.720 1.745 - 1.752: 0.1404% ( 1) 00:18:09.720 1.760 - 1.768: 0.4882% ( 57) 00:18:09.720 1.768 - 1.775: 3.8081% ( 544) 00:18:09.720 1.775 - 1.783: 15.0189% ( 1837) 00:18:09.720 1.783 - 1.790: 27.9568% ( 2120) 00:18:09.720 1.790 - 1.798: 34.4318% ( 1061) 00:18:09.720 1.798 - 1.806: 36.7448% ( 379) 00:18:09.720 1.806 - 1.813: 38.3254% ( 259) 00:18:09.720 1.813 - 1.821: 40.2173% ( 310) 00:18:09.720 1.821 - 1.829: 47.0585% ( 1121) 00:18:09.720 1.829 - 1.836: 60.9301% ( 2273) 00:18:09.720 1.836 - 1.844: 76.0588% ( 2479) 00:18:09.720 1.844 - 1.851: 87.3673% ( 1853) 00:18:09.720 1.851 - 1.859: 93.2076% ( 957) 00:18:09.720 1.859 - 1.867: 95.7525% ( 417) 00:18:09.720 1.867 - 1.874: 96.8754% ( 184) 00:18:09.720 1.874 - 1.882: 97.3331% ( 75) 00:18:09.720 1.882 - 1.890: 97.6260% ( 48) 00:18:09.720 1.890 - 1.897: 97.8518% ( 37) 00:18:09.720 1.897 - 1.905: 98.1203% ( 44) 00:18:09.720 1.905 - 1.912: 98.4316% ( 51) 00:18:09.721 1.912 - 1.920: 98.7611% ( 
54) 00:18:09.721 1.920 - 1.928: 99.0419% ( 46) 00:18:09.721 1.928 - 1.935: 99.2188% ( 29) 00:18:09.721 1.935 - 1.943: 99.3165% ( 16) 00:18:09.721 1.943 - 1.950: 99.3470% ( 5) 00:18:09.721 1.950 - 1.966: 99.3775% ( 5) 00:18:09.721 1.966 - 1.981: 99.3897% ( 2) 00:18:09.721 1.981 - 1.996: 99.4019% ( 2) 00:18:09.721 2.027 - 2.042: 99.4080% ( 1) 00:18:09.721 2.057 - 2.072: 99.4141% ( 1) 00:18:09.721 2.286 - 2.301: 99.4202% ( 1) 00:18:09.721 2.331 - 2.347: 99.4263% ( 1) 00:18:09.721 3.398 - 3.413: 99.4324% ( 1) 00:18:09.721 3.566 - 3.581: 99.4385% ( 1) 00:18:09.721 3.703 - 3.718: 99.4446% ( 1) 00:18:09.721 4.053 - 4.084: 99.4508% ( 1) 00:18:09.721 4.114 - 4.145: 99.4569% ( 1) 00:18:09.721 4.328 - 4.358: 99.4630% ( 1) 00:18:09.721 4.571 - 4.602: 99.4691% ( 1) 00:18:09.721 4.602 - 4.632: 99.4752% ( 1) 00:18:09.721 4.815 - 4.846: 99.4813% ( 1) 00:18:09.721 5.029 - 5.059: 99.4874% ( 1) 00:18:09.721 5.181 - 5.211: 99.4935% ( 1) 00:18:09.721 5.272 - 5.303: 99.4996% ( 1) 00:18:09.721 5.425 - 5.455: 99.5057% ( 1) 00:18:09.721 7.619 - 7.650: 99.5118% ( 1) 00:18:09.721 17.676 - 17.798: 99.5179% ( 1) 00:18:09.721 161.890 - 162.865: 99.5240% ( 1) 00:18:09.721 3978.971 - 3994.575: 99.5301% ( 1) 00:18:09.721 3994.575 - 4025.783: 100.0000% ( 77) 00:18:09.721 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:09.721 [ 00:18:09.721 { 00:18:09.721 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:09.721 "subtype": "Discovery", 00:18:09.721 "listen_addresses": [], 00:18:09.721 "allow_any_host": true, 00:18:09.721 "hosts": [] 00:18:09.721 }, 00:18:09.721 { 00:18:09.721 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:09.721 "subtype": "NVMe", 00:18:09.721 "listen_addresses": [ 00:18:09.721 { 00:18:09.721 "trtype": "VFIOUSER", 00:18:09.721 "adrfam": "IPv4", 00:18:09.721 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:09.721 "trsvcid": "0" 00:18:09.721 } 00:18:09.721 ], 00:18:09.721 "allow_any_host": true, 00:18:09.721 "hosts": [], 00:18:09.721 "serial_number": "SPDK1", 00:18:09.721 "model_number": "SPDK bdev Controller", 00:18:09.721 "max_namespaces": 32, 00:18:09.721 "min_cntlid": 1, 00:18:09.721 "max_cntlid": 65519, 00:18:09.721 "namespaces": [ 00:18:09.721 { 00:18:09.721 "nsid": 1, 00:18:09.721 "bdev_name": "Malloc1", 00:18:09.721 "name": "Malloc1", 00:18:09.721 "nguid": "2945159E859F4917B360E3C1B9BE26C9", 00:18:09.721 "uuid": "2945159e-859f-4917-b360-e3c1b9be26c9" 00:18:09.721 } 00:18:09.721 ] 00:18:09.721 }, 00:18:09.721 { 00:18:09.721 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:09.721 "subtype": "NVMe", 00:18:09.721 "listen_addresses": [ 00:18:09.721 { 00:18:09.721 "trtype": "VFIOUSER", 00:18:09.721 "adrfam": "IPv4", 00:18:09.721 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:09.721 "trsvcid": "0" 00:18:09.721 } 00:18:09.721 ], 00:18:09.721 "allow_any_host": true, 00:18:09.721 "hosts": [], 00:18:09.721 "serial_number": "SPDK2", 00:18:09.721 "model_number": "SPDK bdev Controller", 00:18:09.721 "max_namespaces": 32, 00:18:09.721 "min_cntlid": 1, 00:18:09.721 "max_cntlid": 65519, 00:18:09.721 "namespaces": [ 00:18:09.721 { 00:18:09.721 "nsid": 1, 00:18:09.721 "bdev_name": "Malloc2", 00:18:09.721 "name": "Malloc2", 00:18:09.721 "nguid": 
"2CC9352C8AA84E2897257D702E176760", 00:18:09.721 "uuid": "2cc9352c-8aa8-4e28-9725-7d702e176760" 00:18:09.721 } 00:18:09.721 ] 00:18:09.721 } 00:18:09.721 ] 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=968135 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:09.721 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:09.981 [2024-12-15 12:58:17.774274] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.981 Malloc3 00:18:09.981 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:10.239 [2024-12-15 12:58:18.003004] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:10.239 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:10.239 Asynchronous Event Request test 00:18:10.239 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:10.239 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:10.239 Registering asynchronous event callbacks... 00:18:10.239 Starting namespace attribute notice tests for all controllers... 00:18:10.239 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:10.239 aer_cb - Changed Namespace 00:18:10.239 Cleaning up... 
00:18:10.499 [ 00:18:10.499 { 00:18:10.499 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:10.499 "subtype": "Discovery", 00:18:10.499 "listen_addresses": [], 00:18:10.499 "allow_any_host": true, 00:18:10.499 "hosts": [] 00:18:10.499 }, 00:18:10.499 { 00:18:10.499 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:10.499 "subtype": "NVMe", 00:18:10.499 "listen_addresses": [ 00:18:10.499 { 00:18:10.499 "trtype": "VFIOUSER", 00:18:10.499 "adrfam": "IPv4", 00:18:10.499 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:10.499 "trsvcid": "0" 00:18:10.499 } 00:18:10.499 ], 00:18:10.499 "allow_any_host": true, 00:18:10.499 "hosts": [], 00:18:10.499 "serial_number": "SPDK1", 00:18:10.499 "model_number": "SPDK bdev Controller", 00:18:10.499 "max_namespaces": 32, 00:18:10.499 "min_cntlid": 1, 00:18:10.499 "max_cntlid": 65519, 00:18:10.499 "namespaces": [ 00:18:10.499 { 00:18:10.499 "nsid": 1, 00:18:10.499 "bdev_name": "Malloc1", 00:18:10.499 "name": "Malloc1", 00:18:10.499 "nguid": "2945159E859F4917B360E3C1B9BE26C9", 00:18:10.499 "uuid": "2945159e-859f-4917-b360-e3c1b9be26c9" 00:18:10.499 }, 00:18:10.499 { 00:18:10.499 "nsid": 2, 00:18:10.499 "bdev_name": "Malloc3", 00:18:10.499 "name": "Malloc3", 00:18:10.499 "nguid": "045D507FEE124760A8C4426ACFD710EB", 00:18:10.499 "uuid": "045d507f-ee12-4760-a8c4-426acfd710eb" 00:18:10.499 } 00:18:10.499 ] 00:18:10.499 }, 00:18:10.499 { 00:18:10.500 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:10.500 "subtype": "NVMe", 00:18:10.500 "listen_addresses": [ 00:18:10.500 { 00:18:10.500 "trtype": "VFIOUSER", 00:18:10.500 "adrfam": "IPv4", 00:18:10.500 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:10.500 "trsvcid": "0" 00:18:10.500 } 00:18:10.500 ], 00:18:10.500 "allow_any_host": true, 00:18:10.500 "hosts": [], 00:18:10.500 "serial_number": "SPDK2", 00:18:10.500 "model_number": "SPDK bdev Controller", 00:18:10.500 "max_namespaces": 32, 00:18:10.500 "min_cntlid": 1, 00:18:10.500 "max_cntlid": 65519, 00:18:10.500 "namespaces": [ 
00:18:10.500 { 00:18:10.500 "nsid": 1, 00:18:10.500 "bdev_name": "Malloc2", 00:18:10.500 "name": "Malloc2", 00:18:10.500 "nguid": "2CC9352C8AA84E2897257D702E176760", 00:18:10.500 "uuid": "2cc9352c-8aa8-4e28-9725-7d702e176760" 00:18:10.500 } 00:18:10.500 ] 00:18:10.500 } 00:18:10.500 ] 00:18:10.500 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 968135 00:18:10.500 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:10.500 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:10.500 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:10.500 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:10.500 [2024-12-15 12:58:18.236594] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:10.500 [2024-12-15 12:58:18.236637] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968182 ] 00:18:10.500 [2024-12-15 12:58:18.278160] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:10.500 [2024-12-15 12:58:18.283424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:10.500 [2024-12-15 12:58:18.283447] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f65c10f7000 00:18:10.500 [2024-12-15 12:58:18.284421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.500 [2024-12-15 12:58:18.285428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.500 [2024-12-15 12:58:18.286433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.500 [2024-12-15 12:58:18.287439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.500 [2024-12-15 12:58:18.288439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.500 [2024-12-15 12:58:18.289447] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.500 [2024-12-15 12:58:18.290458] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:10.500 
[2024-12-15 12:58:18.291463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:10.500 [2024-12-15 12:58:18.292468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:10.500 [2024-12-15 12:58:18.292478] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f65bfe01000 00:18:10.500 [2024-12-15 12:58:18.293394] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:10.500 [2024-12-15 12:58:18.302742] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:10.500 [2024-12-15 12:58:18.302767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:10.500 [2024-12-15 12:58:18.307837] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:10.500 [2024-12-15 12:58:18.307875] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:10.500 [2024-12-15 12:58:18.307952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:10.500 [2024-12-15 12:58:18.307966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:10.500 [2024-12-15 12:58:18.307971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:10.500 [2024-12-15 12:58:18.308837] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:10.500 [2024-12-15 12:58:18.308847] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:10.500 [2024-12-15 12:58:18.308853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:10.500 [2024-12-15 12:58:18.309844] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:10.500 [2024-12-15 12:58:18.309852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:10.500 [2024-12-15 12:58:18.309859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:10.500 [2024-12-15 12:58:18.310857] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:10.500 [2024-12-15 12:58:18.310868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:10.500 [2024-12-15 12:58:18.311856] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:10.500 [2024-12-15 12:58:18.311865] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:10.500 [2024-12-15 12:58:18.311869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:10.500 [2024-12-15 12:58:18.311875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:10.500 [2024-12-15 12:58:18.311983] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:10.500 [2024-12-15 12:58:18.311987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:10.500 [2024-12-15 12:58:18.311992] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:10.500 [2024-12-15 12:58:18.312867] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:10.500 [2024-12-15 12:58:18.313874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:10.500 [2024-12-15 12:58:18.314886] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:10.500 [2024-12-15 12:58:18.315894] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:10.500 [2024-12-15 12:58:18.315932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:10.500 [2024-12-15 12:58:18.316902] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:10.500 [2024-12-15 12:58:18.316910] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:10.500 [2024-12-15 12:58:18.316915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:10.500 [2024-12-15 12:58:18.316931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:10.500 [2024-12-15 12:58:18.316938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:10.500 [2024-12-15 12:58:18.316948] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.500 [2024-12-15 12:58:18.316953] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.500 [2024-12-15 12:58:18.316956] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.500 [2024-12-15 12:58:18.316968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.500 [2024-12-15 12:58:18.323832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:10.500 [2024-12-15 12:58:18.323843] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:10.500 [2024-12-15 12:58:18.323847] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:10.500 [2024-12-15 12:58:18.323853] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:10.500 [2024-12-15 12:58:18.323857] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:10.500 [2024-12-15 12:58:18.323862] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:10.500 [2024-12-15 12:58:18.323866] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:10.500 [2024-12-15 12:58:18.323870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:10.500 [2024-12-15 12:58:18.323879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:10.500 [2024-12-15 12:58:18.323890] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:10.500 [2024-12-15 12:58:18.331829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:10.500 [2024-12-15 12:58:18.331842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.500 [2024-12-15 12:58:18.331849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.500 [2024-12-15 12:58:18.331856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.501 [2024-12-15 12:58:18.331863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.501 [2024-12-15 12:58:18.331867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.331875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.331884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.337830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:10.501 [2024-12-15 12:58:18.337838] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:10.501 [2024-12-15 12:58:18.337843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.337850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.337855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.337863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.347829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:10.501 [2024-12-15 12:58:18.347881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.347891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:10.501 
[2024-12-15 12:58:18.347902] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:10.501 [2024-12-15 12:58:18.347907] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:10.501 [2024-12-15 12:58:18.347910] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.501 [2024-12-15 12:58:18.347916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.355829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:10.501 [2024-12-15 12:58:18.355839] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:10.501 [2024-12-15 12:58:18.355850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.355857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.355863] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.501 [2024-12-15 12:58:18.355867] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.501 [2024-12-15 12:58:18.355870] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.501 [2024-12-15 12:58:18.355875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.363830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:10.501 [2024-12-15 12:58:18.363843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.363850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.363856] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:10.501 [2024-12-15 12:58:18.363860] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.501 [2024-12-15 12:58:18.363863] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.501 [2024-12-15 12:58:18.363868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.371829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:10.501 [2024-12-15 12:58:18.371837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.371843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.371851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.371856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.371861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.371865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.371872] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:10.501 [2024-12-15 12:58:18.371876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:10.501 [2024-12-15 12:58:18.371880] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:10.501 [2024-12-15 12:58:18.371897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.379831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:10.501 [2024-12-15 12:58:18.379843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.387830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:10.501 [2024-12-15 12:58:18.387843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.395829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:10.501 [2024-12-15 
12:58:18.395841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.402830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:10.501 [2024-12-15 12:58:18.402846] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:10.501 [2024-12-15 12:58:18.402850] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:10.501 [2024-12-15 12:58:18.402853] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:10.501 [2024-12-15 12:58:18.402857] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:10.501 [2024-12-15 12:58:18.402859] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:10.501 [2024-12-15 12:58:18.402865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:10.501 [2024-12-15 12:58:18.402872] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:10.501 [2024-12-15 12:58:18.402876] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:10.501 [2024-12-15 12:58:18.402879] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.501 [2024-12-15 12:58:18.402884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.402890] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:10.501 [2024-12-15 12:58:18.402894] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:10.501 [2024-12-15 12:58:18.402896] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.501 [2024-12-15 12:58:18.402902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:10.501 [2024-12-15 12:58:18.402908] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:10.501 [2024-12-15 12:58:18.402912] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:10.501 [2024-12-15 12:58:18.402915] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:10.501 [2024-12-15 12:58:18.402922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:10.759 [2024-12-15 12:58:18.411830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:10.759 [2024-12-15 12:58:18.411843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:10.759 [2024-12-15 12:58:18.411852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:10.759 [2024-12-15 12:58:18.411858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:10.759 ===================================================== 00:18:10.759 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:10.759 ===================================================== 00:18:10.759 Controller Capabilities/Features 00:18:10.759 
================================ 00:18:10.759 Vendor ID: 4e58 00:18:10.759 Subsystem Vendor ID: 4e58 00:18:10.759 Serial Number: SPDK2 00:18:10.759 Model Number: SPDK bdev Controller 00:18:10.759 Firmware Version: 25.01 00:18:10.759 Recommended Arb Burst: 6 00:18:10.759 IEEE OUI Identifier: 8d 6b 50 00:18:10.759 Multi-path I/O 00:18:10.759 May have multiple subsystem ports: Yes 00:18:10.759 May have multiple controllers: Yes 00:18:10.759 Associated with SR-IOV VF: No 00:18:10.759 Max Data Transfer Size: 131072 00:18:10.759 Max Number of Namespaces: 32 00:18:10.759 Max Number of I/O Queues: 127 00:18:10.759 NVMe Specification Version (VS): 1.3 00:18:10.759 NVMe Specification Version (Identify): 1.3 00:18:10.759 Maximum Queue Entries: 256 00:18:10.759 Contiguous Queues Required: Yes 00:18:10.759 Arbitration Mechanisms Supported 00:18:10.759 Weighted Round Robin: Not Supported 00:18:10.759 Vendor Specific: Not Supported 00:18:10.759 Reset Timeout: 15000 ms 00:18:10.759 Doorbell Stride: 4 bytes 00:18:10.759 NVM Subsystem Reset: Not Supported 00:18:10.759 Command Sets Supported 00:18:10.759 NVM Command Set: Supported 00:18:10.759 Boot Partition: Not Supported 00:18:10.759 Memory Page Size Minimum: 4096 bytes 00:18:10.759 Memory Page Size Maximum: 4096 bytes 00:18:10.759 Persistent Memory Region: Not Supported 00:18:10.759 Optional Asynchronous Events Supported 00:18:10.759 Namespace Attribute Notices: Supported 00:18:10.759 Firmware Activation Notices: Not Supported 00:18:10.759 ANA Change Notices: Not Supported 00:18:10.759 PLE Aggregate Log Change Notices: Not Supported 00:18:10.759 LBA Status Info Alert Notices: Not Supported 00:18:10.759 EGE Aggregate Log Change Notices: Not Supported 00:18:10.759 Normal NVM Subsystem Shutdown event: Not Supported 00:18:10.759 Zone Descriptor Change Notices: Not Supported 00:18:10.759 Discovery Log Change Notices: Not Supported 00:18:10.759 Controller Attributes 00:18:10.759 128-bit Host Identifier: Supported 00:18:10.759 
Non-Operational Permissive Mode: Not Supported 00:18:10.759 NVM Sets: Not Supported 00:18:10.759 Read Recovery Levels: Not Supported 00:18:10.759 Endurance Groups: Not Supported 00:18:10.759 Predictable Latency Mode: Not Supported 00:18:10.759 Traffic Based Keep ALive: Not Supported 00:18:10.759 Namespace Granularity: Not Supported 00:18:10.759 SQ Associations: Not Supported 00:18:10.759 UUID List: Not Supported 00:18:10.759 Multi-Domain Subsystem: Not Supported 00:18:10.759 Fixed Capacity Management: Not Supported 00:18:10.759 Variable Capacity Management: Not Supported 00:18:10.759 Delete Endurance Group: Not Supported 00:18:10.759 Delete NVM Set: Not Supported 00:18:10.759 Extended LBA Formats Supported: Not Supported 00:18:10.759 Flexible Data Placement Supported: Not Supported 00:18:10.759 00:18:10.759 Controller Memory Buffer Support 00:18:10.759 ================================ 00:18:10.759 Supported: No 00:18:10.759 00:18:10.759 Persistent Memory Region Support 00:18:10.759 ================================ 00:18:10.759 Supported: No 00:18:10.759 00:18:10.759 Admin Command Set Attributes 00:18:10.759 ============================ 00:18:10.759 Security Send/Receive: Not Supported 00:18:10.759 Format NVM: Not Supported 00:18:10.759 Firmware Activate/Download: Not Supported 00:18:10.759 Namespace Management: Not Supported 00:18:10.759 Device Self-Test: Not Supported 00:18:10.759 Directives: Not Supported 00:18:10.759 NVMe-MI: Not Supported 00:18:10.759 Virtualization Management: Not Supported 00:18:10.759 Doorbell Buffer Config: Not Supported 00:18:10.759 Get LBA Status Capability: Not Supported 00:18:10.759 Command & Feature Lockdown Capability: Not Supported 00:18:10.759 Abort Command Limit: 4 00:18:10.759 Async Event Request Limit: 4 00:18:10.759 Number of Firmware Slots: N/A 00:18:10.759 Firmware Slot 1 Read-Only: N/A 00:18:10.759 Firmware Activation Without Reset: N/A 00:18:10.759 Multiple Update Detection Support: N/A 00:18:10.759 Firmware Update 
Granularity: No Information Provided 00:18:10.759 Per-Namespace SMART Log: No 00:18:10.759 Asymmetric Namespace Access Log Page: Not Supported 00:18:10.759 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:10.759 Command Effects Log Page: Supported 00:18:10.759 Get Log Page Extended Data: Supported 00:18:10.759 Telemetry Log Pages: Not Supported 00:18:10.759 Persistent Event Log Pages: Not Supported 00:18:10.759 Supported Log Pages Log Page: May Support 00:18:10.759 Commands Supported & Effects Log Page: Not Supported 00:18:10.759 Feature Identifiers & Effects Log Page:May Support 00:18:10.759 NVMe-MI Commands & Effects Log Page: May Support 00:18:10.759 Data Area 4 for Telemetry Log: Not Supported 00:18:10.759 Error Log Page Entries Supported: 128 00:18:10.759 Keep Alive: Supported 00:18:10.759 Keep Alive Granularity: 10000 ms 00:18:10.759 00:18:10.759 NVM Command Set Attributes 00:18:10.759 ========================== 00:18:10.759 Submission Queue Entry Size 00:18:10.759 Max: 64 00:18:10.759 Min: 64 00:18:10.759 Completion Queue Entry Size 00:18:10.760 Max: 16 00:18:10.760 Min: 16 00:18:10.760 Number of Namespaces: 32 00:18:10.760 Compare Command: Supported 00:18:10.760 Write Uncorrectable Command: Not Supported 00:18:10.760 Dataset Management Command: Supported 00:18:10.760 Write Zeroes Command: Supported 00:18:10.760 Set Features Save Field: Not Supported 00:18:10.760 Reservations: Not Supported 00:18:10.760 Timestamp: Not Supported 00:18:10.760 Copy: Supported 00:18:10.760 Volatile Write Cache: Present 00:18:10.760 Atomic Write Unit (Normal): 1 00:18:10.760 Atomic Write Unit (PFail): 1 00:18:10.760 Atomic Compare & Write Unit: 1 00:18:10.760 Fused Compare & Write: Supported 00:18:10.760 Scatter-Gather List 00:18:10.760 SGL Command Set: Supported (Dword aligned) 00:18:10.760 SGL Keyed: Not Supported 00:18:10.760 SGL Bit Bucket Descriptor: Not Supported 00:18:10.760 SGL Metadata Pointer: Not Supported 00:18:10.760 Oversized SGL: Not Supported 00:18:10.760 SGL 
Metadata Address: Not Supported 00:18:10.760 SGL Offset: Not Supported 00:18:10.760 Transport SGL Data Block: Not Supported 00:18:10.760 Replay Protected Memory Block: Not Supported 00:18:10.760 00:18:10.760 Firmware Slot Information 00:18:10.760 ========================= 00:18:10.760 Active slot: 1 00:18:10.760 Slot 1 Firmware Revision: 25.01 00:18:10.760 00:18:10.760 00:18:10.760 Commands Supported and Effects 00:18:10.760 ============================== 00:18:10.760 Admin Commands 00:18:10.760 -------------- 00:18:10.760 Get Log Page (02h): Supported 00:18:10.760 Identify (06h): Supported 00:18:10.760 Abort (08h): Supported 00:18:10.760 Set Features (09h): Supported 00:18:10.760 Get Features (0Ah): Supported 00:18:10.760 Asynchronous Event Request (0Ch): Supported 00:18:10.760 Keep Alive (18h): Supported 00:18:10.760 I/O Commands 00:18:10.760 ------------ 00:18:10.760 Flush (00h): Supported LBA-Change 00:18:10.760 Write (01h): Supported LBA-Change 00:18:10.760 Read (02h): Supported 00:18:10.760 Compare (05h): Supported 00:18:10.760 Write Zeroes (08h): Supported LBA-Change 00:18:10.760 Dataset Management (09h): Supported LBA-Change 00:18:10.760 Copy (19h): Supported LBA-Change 00:18:10.760 00:18:10.760 Error Log 00:18:10.760 ========= 00:18:10.760 00:18:10.760 Arbitration 00:18:10.760 =========== 00:18:10.760 Arbitration Burst: 1 00:18:10.760 00:18:10.760 Power Management 00:18:10.760 ================ 00:18:10.760 Number of Power States: 1 00:18:10.760 Current Power State: Power State #0 00:18:10.760 Power State #0: 00:18:10.760 Max Power: 0.00 W 00:18:10.760 Non-Operational State: Operational 00:18:10.760 Entry Latency: Not Reported 00:18:10.760 Exit Latency: Not Reported 00:18:10.760 Relative Read Throughput: 0 00:18:10.760 Relative Read Latency: 0 00:18:10.760 Relative Write Throughput: 0 00:18:10.760 Relative Write Latency: 0 00:18:10.760 Idle Power: Not Reported 00:18:10.760 Active Power: Not Reported 00:18:10.760 Non-Operational Permissive Mode: Not 
Supported 00:18:10.760 [2024-12-15 12:58:18.411945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:10.760 [2024-12-15 12:58:18.419829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:10.760 [2024-12-15 12:58:18.419857] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:10.760 [2024-12-15 12:58:18.419866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.760 [2024-12-15 12:58:18.419872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.760 [2024-12-15 12:58:18.419877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.760 [2024-12-15 12:58:18.419882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.760 [2024-12-15 12:58:18.419921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:10.760 [2024-12-15 12:58:18.419931] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:10.760 [2024-12-15 12:58:18.420927] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:10.760 [2024-12-15 12:58:18.420970] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:10.760 [2024-12-15 12:58:18.420977] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:10.760 [2024-12-15 12:58:18.421936] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:10.760 [2024-12-15 12:58:18.421947] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:10.760 [2024-12-15 12:58:18.421998] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:10.760 [2024-12-15 12:58:18.424829] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:10.760
00:18:10.760 Health Information 00:18:10.760 ================== 00:18:10.760 Critical Warnings: 00:18:10.760 Available Spare Space: OK 00:18:10.760 Temperature: OK 00:18:10.760 Device Reliability: OK 00:18:10.760 Read Only: No 00:18:10.760 Volatile Memory Backup: OK 00:18:10.760 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:10.760 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:10.760 Available Spare: 0% 00:18:10.760 Available Spare Threshold: 0% 00:18:10.760 Life Percentage Used: 0% 00:18:10.760 Data Units Read: 0 00:18:10.760 Data Units Written: 0 00:18:10.760 Host Read Commands: 0 00:18:10.760 Host Write Commands: 0 00:18:10.760 Controller Busy Time: 0 minutes 00:18:10.760 Power Cycles: 0 00:18:10.760 Power On Hours: 0 hours 00:18:10.760 Unsafe Shutdowns: 0 00:18:10.760 Unrecoverable Media Errors: 0 00:18:10.760 Lifetime Error Log Entries: 0 00:18:10.760 Warning Temperature Time: 0 minutes 00:18:10.760 Critical Temperature Time: 0 minutes 00:18:10.760 00:18:10.760 Number of Queues 00:18:10.760 ================ 00:18:10.760 Number of I/O Submission Queues: 127 00:18:10.760 Number of I/O Completion Queues: 127 00:18:10.760 00:18:10.760 Active Namespaces 00:18:10.760 ================= 00:18:10.760 Namespace ID:1 00:18:10.760 Error Recovery Timeout: Unlimited
00:18:10.760 Command Set Identifier: NVM (00h) 00:18:10.760 Deallocate: Supported 00:18:10.760 Deallocated/Unwritten Error: Not Supported 00:18:10.760 Deallocated Read Value: Unknown 00:18:10.760 Deallocate in Write Zeroes: Not Supported 00:18:10.760 Deallocated Guard Field: 0xFFFF 00:18:10.760 Flush: Supported 00:18:10.760 Reservation: Supported 00:18:10.760 Namespace Sharing Capabilities: Multiple Controllers 00:18:10.760 Size (in LBAs): 131072 (0GiB) 00:18:10.760 Capacity (in LBAs): 131072 (0GiB) 00:18:10.760 Utilization (in LBAs): 131072 (0GiB) 00:18:10.760 NGUID: 2CC9352C8AA84E2897257D702E176760 00:18:10.760 UUID: 2cc9352c-8aa8-4e28-9725-7d702e176760 00:18:10.760 Thin Provisioning: Not Supported 00:18:10.760 Per-NS Atomic Units: Yes 00:18:10.760 Atomic Boundary Size (Normal): 0 00:18:10.760 Atomic Boundary Size (PFail): 0 00:18:10.760 Atomic Boundary Offset: 0 00:18:10.760 Maximum Single Source Range Length: 65535 00:18:10.760 Maximum Copy Length: 65535 00:18:10.760 Maximum Source Range Count: 1 00:18:10.760 NGUID/EUI64 Never Reused: No 00:18:10.760 Namespace Write Protected: No 00:18:10.760 Number of LBA Formats: 1 00:18:10.760 Current LBA Format: LBA Format #00 00:18:10.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:10.760 00:18:10.760 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:10.760 [2024-12-15 12:58:18.653681] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:16.155 Initializing NVMe Controllers 00:18:16.155 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:16.155 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:16.155 Initialization complete. Launching workers. 00:18:16.155 ======================================================== 00:18:16.155 Latency(us) 00:18:16.155 Device Information : IOPS MiB/s Average min max 00:18:16.155 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39880.69 155.78 3209.44 971.04 9609.12 00:18:16.155 ======================================================== 00:18:16.155 Total : 39880.69 155.78 3209.44 971.04 9609.12 00:18:16.155 00:18:16.155 [2024-12-15 12:58:23.756089] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:16.155 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:16.155 [2024-12-15 12:58:23.994844] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:21.425 Initializing NVMe Controllers 00:18:21.425 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:21.425 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:21.425 Initialization complete. Launching workers. 
00:18:21.425 ======================================================== 00:18:21.425 Latency(us) 00:18:21.425 Device Information : IOPS MiB/s Average min max 00:18:21.425 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39956.88 156.08 3203.29 964.71 7591.12 00:18:21.425 ======================================================== 00:18:21.425 Total : 39956.88 156.08 3203.29 964.71 7591.12 00:18:21.425 00:18:21.425 [2024-12-15 12:58:29.015057] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:21.425 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:21.425 [2024-12-15 12:58:29.226320] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:26.695 [2024-12-15 12:58:34.361921] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:26.695 Initializing NVMe Controllers 00:18:26.695 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:26.695 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:26.695 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:26.695 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:26.695 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:26.695 Initialization complete. Launching workers. 
00:18:26.695 Starting thread on core 2 00:18:26.695 Starting thread on core 3 00:18:26.695 Starting thread on core 1 00:18:26.695 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:26.954 [2024-12-15 12:58:34.656839] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.242 [2024-12-15 12:58:37.727302] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.242 Initializing NVMe Controllers 00:18:30.242 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.242 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.242 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:30.242 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:30.242 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:30.242 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:30.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:30.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:30.242 Initialization complete. Launching workers. 
00:18:30.242 Starting thread on core 1 with urgent priority queue 00:18:30.242 Starting thread on core 2 with urgent priority queue 00:18:30.242 Starting thread on core 3 with urgent priority queue 00:18:30.242 Starting thread on core 0 with urgent priority queue 00:18:30.242 SPDK bdev Controller (SPDK2 ) core 0: 9442.67 IO/s 10.59 secs/100000 ios 00:18:30.242 SPDK bdev Controller (SPDK2 ) core 1: 7222.67 IO/s 13.85 secs/100000 ios 00:18:30.242 SPDK bdev Controller (SPDK2 ) core 2: 7425.00 IO/s 13.47 secs/100000 ios 00:18:30.242 SPDK bdev Controller (SPDK2 ) core 3: 8623.33 IO/s 11.60 secs/100000 ios 00:18:30.242 ======================================================== 00:18:30.242 00:18:30.242 12:58:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:30.242 [2024-12-15 12:58:38.022312] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.242 Initializing NVMe Controllers 00:18:30.242 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.242 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.242 Namespace ID: 1 size: 0GB 00:18:30.242 Initialization complete. 00:18:30.242 INFO: using host memory buffer for IO 00:18:30.242 Hello world! 
00:18:30.242 [2024-12-15 12:58:38.032378] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.242 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:30.501 [2024-12-15 12:58:38.307947] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:31.878 Initializing NVMe Controllers 00:18:31.878 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.878 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.878 Initialization complete. Launching workers. 00:18:31.878 submit (in ns) avg, min, max = 6086.3, 3183.8, 3998162.9 00:18:31.878 complete (in ns) avg, min, max = 22228.4, 1769.5, 4003861.0 00:18:31.878 00:18:31.878 Submit histogram 00:18:31.878 ================ 00:18:31.878 Range in us Cumulative Count 00:18:31.878 3.170 - 3.185: 0.0062% ( 1) 00:18:31.878 3.185 - 3.200: 0.3648% ( 58) 00:18:31.878 3.200 - 3.215: 2.4175% ( 332) 00:18:31.878 3.215 - 3.230: 7.3884% ( 804) 00:18:31.878 3.230 - 3.246: 12.8663% ( 886) 00:18:31.878 3.246 - 3.261: 18.7461% ( 951) 00:18:31.878 3.261 - 3.276: 26.1840% ( 1203) 00:18:31.878 3.276 - 3.291: 33.0098% ( 1104) 00:18:31.878 3.291 - 3.307: 39.0503% ( 977) 00:18:31.878 3.307 - 3.322: 44.1635% ( 827) 00:18:31.878 3.322 - 3.337: 49.1530% ( 807) 00:18:31.878 3.337 - 3.352: 52.9492% ( 614) 00:18:31.878 3.352 - 3.368: 58.0994% ( 833) 00:18:31.878 3.368 - 3.383: 65.0117% ( 1118) 00:18:31.878 3.383 - 3.398: 69.4572% ( 719) 00:18:31.878 3.398 - 3.413: 75.0031% ( 897) 00:18:31.878 3.413 - 3.429: 80.4130% ( 875) 00:18:31.878 3.429 - 3.444: 83.8754% ( 560) 00:18:31.878 3.444 - 3.459: 85.8353% ( 317) 00:18:31.878 3.459 - 3.474: 87.0038% ( 189) 00:18:31.878 3.474 - 3.490: 87.6592% ( 106) 
00:18:31.878 3.490 - 3.505: 88.2528% ( 96) 00:18:31.878 3.505 - 3.520: 88.9329% ( 110) 00:18:31.878 3.520 - 3.535: 89.6933% ( 123) 00:18:31.878 3.535 - 3.550: 90.7382% ( 169) 00:18:31.878 3.550 - 3.566: 91.6656% ( 150) 00:18:31.878 3.566 - 3.581: 92.5807% ( 148) 00:18:31.878 3.581 - 3.596: 93.3597% ( 126) 00:18:31.878 3.596 - 3.611: 94.1078% ( 121) 00:18:31.878 3.611 - 3.627: 94.8745% ( 124) 00:18:31.878 3.627 - 3.642: 95.7401% ( 140) 00:18:31.878 3.642 - 3.657: 96.4882% ( 121) 00:18:31.878 3.657 - 3.672: 97.2796% ( 128) 00:18:31.878 3.672 - 3.688: 97.9226% ( 104) 00:18:31.878 3.688 - 3.703: 98.3368% ( 67) 00:18:31.878 3.703 - 3.718: 98.7449% ( 66) 00:18:31.878 3.718 - 3.733: 98.9922% ( 40) 00:18:31.879 3.733 - 3.749: 99.1962% ( 33) 00:18:31.879 3.749 - 3.764: 99.3694% ( 28) 00:18:31.879 3.764 - 3.779: 99.4930% ( 20) 00:18:31.879 3.779 - 3.794: 99.5610% ( 11) 00:18:31.879 3.794 - 3.810: 99.5796% ( 3) 00:18:31.879 3.810 - 3.825: 99.5919% ( 2) 00:18:31.879 3.825 - 3.840: 99.6105% ( 3) 00:18:31.879 3.855 - 3.870: 99.6167% ( 1) 00:18:31.879 4.023 - 4.053: 99.6229% ( 1) 00:18:31.879 4.053 - 4.084: 99.6290% ( 1) 00:18:31.879 4.846 - 4.876: 99.6352% ( 1) 00:18:31.879 4.937 - 4.968: 99.6414% ( 1) 00:18:31.879 5.029 - 5.059: 99.6476% ( 1) 00:18:31.879 5.059 - 5.090: 99.6599% ( 2) 00:18:31.879 5.150 - 5.181: 99.6847% ( 4) 00:18:31.879 5.242 - 5.272: 99.6970% ( 2) 00:18:31.879 5.272 - 5.303: 99.7094% ( 2) 00:18:31.879 5.333 - 5.364: 99.7156% ( 1) 00:18:31.879 5.364 - 5.394: 99.7218% ( 1) 00:18:31.879 5.394 - 5.425: 99.7280% ( 1) 00:18:31.879 5.425 - 5.455: 99.7403% ( 2) 00:18:31.879 5.455 - 5.486: 99.7465% ( 1) 00:18:31.879 5.516 - 5.547: 99.7527% ( 1) 00:18:31.879 5.577 - 5.608: 99.7589% ( 1) 00:18:31.879 5.638 - 5.669: 99.7651% ( 1) 00:18:31.879 5.730 - 5.760: 99.7712% ( 1) 00:18:31.879 5.821 - 5.851: 99.7774% ( 1) 00:18:31.879 5.882 - 5.912: 99.7960% ( 3) 00:18:31.879 5.912 - 5.943: 99.8022% ( 1) 00:18:31.879 6.004 - 6.034: 99.8083% ( 1) 00:18:31.879 6.065 - 6.095: 
99.8145% ( 1) 00:18:31.879 6.095 - 6.126: 99.8207% ( 1) 00:18:31.879 6.156 - 6.187: 99.8269% ( 1) 00:18:31.879 6.248 - 6.278: 99.8331% ( 1) 00:18:31.879 6.309 - 6.339: 99.8392% ( 1) 00:18:31.879 6.339 - 6.370: 99.8454% ( 1) 00:18:31.879 6.400 - 6.430: 99.8516% ( 1) 00:18:31.879 6.674 - 6.705: 99.8640% ( 2) 00:18:31.879 6.857 - 6.888: 99.8702% ( 1) 00:18:31.879 6.979 - 7.010: 99.8763% ( 1) 00:18:31.879 7.131 - 7.162: 99.8887% ( 2) 00:18:31.879 7.589 - 7.619: 99.8949% ( 1) 00:18:31.879 [2024-12-15 12:58:39.410811] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:31.879 7.771 - 7.802: 99.9011% ( 1) 00:18:31.879 7.985 - 8.046: 99.9073% ( 1) 00:18:31.879 8.168 - 8.229: 99.9134% ( 1) 00:18:31.879 8.411 - 8.472: 99.9196% ( 1) 00:18:31.879 8.594 - 8.655: 99.9258% ( 1) 00:18:31.879 35.109 - 35.352: 99.9320% ( 1) 00:18:31.879 3994.575 - 4025.783: 100.0000% ( 11) 00:18:31.879 00:18:31.879 Complete histogram 00:18:31.879 ================== 00:18:31.879 Range in us Cumulative Count 00:18:31.879 1.768 - 1.775: 0.1298% ( 21) 00:18:31.879 1.775 - 1.783: 2.6091% ( 401) 00:18:31.879 1.783 - 1.790: 13.8618% ( 1820) 00:18:31.879 1.790 - 1.798: 34.4380% ( 3328) 00:18:31.879 1.798 - 1.806: 52.2258% ( 2877) 00:18:31.879 1.806 - 1.813: 60.8384% ( 1393) 00:18:31.879 1.813 - 1.821: 64.4800% ( 589) 00:18:31.879 1.821 - 1.829: 67.2870% ( 454) 00:18:31.879 1.829 - 1.836: 72.0972% ( 778) 00:18:31.879 1.836 - 1.844: 80.4563% ( 1352) 00:18:31.879 1.844 - 1.851: 88.8772% ( 1362) 00:18:31.879 1.851 - 1.859: 93.7121% ( 782) 00:18:31.879 1.859 - 1.867: 95.9688% ( 365) 00:18:31.879 1.867 - 1.874: 97.3167% ( 218) 00:18:31.879 1.874 - 1.882: 98.0153% ( 113) 00:18:31.879 1.882 - 1.890: 98.4110% ( 64) 00:18:31.879 1.890 - 1.897: 98.5903% ( 29) 00:18:31.879 1.897 - 1.905: 98.7758% ( 30) 00:18:31.879 1.905 - 1.912: 98.9613% ( 30) 00:18:31.879 1.912 - 1.920: 99.0602% ( 16) 00:18:31.879 1.920 - 1.928: 99.2024% ( 23) 00:18:31.879 1.928 - 1.935: 
99.2333% ( 5) 00:18:31.879 1.935 - 1.943: 99.2643% ( 5) 00:18:31.879 1.943 - 1.950: 99.2828% ( 3) 00:18:31.879 1.966 - 1.981: 99.2890% ( 1) 00:18:31.879 1.996 - 2.011: 99.2952% ( 1) 00:18:31.879 2.011 - 2.027: 99.3013% ( 1) 00:18:31.879 2.149 - 2.164: 99.3075% ( 1) 00:18:31.879 2.164 - 2.179: 99.3137% ( 1) 00:18:31.879 2.179 - 2.194: 99.3199% ( 1) 00:18:31.879 2.210 - 2.225: 99.3261% ( 1) 00:18:31.879 2.301 - 2.316: 99.3323% ( 1) 00:18:31.879 3.215 - 3.230: 99.3384% ( 1) 00:18:31.879 3.459 - 3.474: 99.3446% ( 1) 00:18:31.879 3.566 - 3.581: 99.3508% ( 1) 00:18:31.879 3.794 - 3.810: 99.3570% ( 1) 00:18:31.879 3.840 - 3.855: 99.3632% ( 1) 00:18:31.879 3.855 - 3.870: 99.3694% ( 1) 00:18:31.879 3.886 - 3.901: 99.3755% ( 1) 00:18:31.879 4.023 - 4.053: 99.3817% ( 1) 00:18:31.879 4.084 - 4.114: 99.3879% ( 1) 00:18:31.879 4.114 - 4.145: 99.3941% ( 1) 00:18:31.879 4.145 - 4.175: 99.4003% ( 1) 00:18:31.879 4.236 - 4.267: 99.4065% ( 1) 00:18:31.879 4.389 - 4.419: 99.4126% ( 1) 00:18:31.879 4.450 - 4.480: 99.4188% ( 1) 00:18:31.879 4.571 - 4.602: 99.4250% ( 1) 00:18:31.879 4.846 - 4.876: 99.4312% ( 1) 00:18:31.879 4.907 - 4.937: 99.4374% ( 1) 00:18:31.879 5.059 - 5.090: 99.4436% ( 1) 00:18:31.879 5.333 - 5.364: 99.4559% ( 2) 00:18:31.879 5.516 - 5.547: 99.4621% ( 1) 00:18:31.879 6.187 - 6.217: 99.4745% ( 2) 00:18:31.879 11.032 - 11.093: 99.4806% ( 1) 00:18:31.879 52.175 - 52.419: 99.4868% ( 1) 00:18:31.879 2543.421 - 2559.025: 99.4930% ( 1) 00:18:31.879 3978.971 - 3994.575: 99.5116% ( 3) 00:18:31.879 3994.575 - 4025.783: 100.0000% ( 79) 00:18:31.879 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # 
local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:31.879 [ 00:18:31.879 { 00:18:31.879 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:31.879 "subtype": "Discovery", 00:18:31.879 "listen_addresses": [], 00:18:31.879 "allow_any_host": true, 00:18:31.879 "hosts": [] 00:18:31.879 }, 00:18:31.879 { 00:18:31.879 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:31.879 "subtype": "NVMe", 00:18:31.879 "listen_addresses": [ 00:18:31.879 { 00:18:31.879 "trtype": "VFIOUSER", 00:18:31.879 "adrfam": "IPv4", 00:18:31.879 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:31.879 "trsvcid": "0" 00:18:31.879 } 00:18:31.879 ], 00:18:31.879 "allow_any_host": true, 00:18:31.879 "hosts": [], 00:18:31.879 "serial_number": "SPDK1", 00:18:31.879 "model_number": "SPDK bdev Controller", 00:18:31.879 "max_namespaces": 32, 00:18:31.879 "min_cntlid": 1, 00:18:31.879 "max_cntlid": 65519, 00:18:31.879 "namespaces": [ 00:18:31.879 { 00:18:31.879 "nsid": 1, 00:18:31.879 "bdev_name": "Malloc1", 00:18:31.879 "name": "Malloc1", 00:18:31.879 "nguid": "2945159E859F4917B360E3C1B9BE26C9", 00:18:31.879 "uuid": "2945159e-859f-4917-b360-e3c1b9be26c9" 00:18:31.879 }, 00:18:31.879 { 00:18:31.879 "nsid": 2, 00:18:31.879 "bdev_name": "Malloc3", 00:18:31.879 "name": "Malloc3", 00:18:31.879 "nguid": "045D507FEE124760A8C4426ACFD710EB", 00:18:31.879 "uuid": "045d507f-ee12-4760-a8c4-426acfd710eb" 00:18:31.879 } 00:18:31.879 ] 00:18:31.879 }, 00:18:31.879 { 00:18:31.879 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:31.879 "subtype": "NVMe", 00:18:31.879 "listen_addresses": [ 00:18:31.879 { 00:18:31.879 "trtype": "VFIOUSER", 00:18:31.879 "adrfam": "IPv4", 00:18:31.879 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:18:31.879 "trsvcid": "0" 00:18:31.879 } 00:18:31.879 ], 00:18:31.879 "allow_any_host": true, 00:18:31.879 "hosts": [], 00:18:31.879 "serial_number": "SPDK2", 00:18:31.879 "model_number": "SPDK bdev Controller", 00:18:31.879 "max_namespaces": 32, 00:18:31.879 "min_cntlid": 1, 00:18:31.879 "max_cntlid": 65519, 00:18:31.879 "namespaces": [ 00:18:31.879 { 00:18:31.879 "nsid": 1, 00:18:31.879 "bdev_name": "Malloc2", 00:18:31.879 "name": "Malloc2", 00:18:31.879 "nguid": "2CC9352C8AA84E2897257D702E176760", 00:18:31.879 "uuid": "2cc9352c-8aa8-4e28-9725-7d702e176760" 00:18:31.879 } 00:18:31.879 ] 00:18:31.879 } 00:18:31.879 ] 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=971634 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:31.879 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:31.880 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:32.139 [2024-12-15 12:58:39.806789] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:32.139 Malloc4 00:18:32.139 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:32.398 [2024-12-15 12:58:40.073927] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:32.398 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:32.398 Asynchronous Event Request test 00:18:32.398 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.398 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:32.398 Registering asynchronous event callbacks... 00:18:32.398 Starting namespace attribute notice tests for all controllers... 00:18:32.398 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:32.398 aer_cb - Changed Namespace 00:18:32.398 Cleaning up... 
00:18:32.398 [ 00:18:32.398 { 00:18:32.398 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:32.398 "subtype": "Discovery", 00:18:32.398 "listen_addresses": [], 00:18:32.398 "allow_any_host": true, 00:18:32.398 "hosts": [] 00:18:32.398 }, 00:18:32.398 { 00:18:32.398 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:32.398 "subtype": "NVMe", 00:18:32.398 "listen_addresses": [ 00:18:32.398 { 00:18:32.398 "trtype": "VFIOUSER", 00:18:32.398 "adrfam": "IPv4", 00:18:32.398 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:32.398 "trsvcid": "0" 00:18:32.398 } 00:18:32.398 ], 00:18:32.398 "allow_any_host": true, 00:18:32.398 "hosts": [], 00:18:32.398 "serial_number": "SPDK1", 00:18:32.398 "model_number": "SPDK bdev Controller", 00:18:32.398 "max_namespaces": 32, 00:18:32.398 "min_cntlid": 1, 00:18:32.398 "max_cntlid": 65519, 00:18:32.398 "namespaces": [ 00:18:32.398 { 00:18:32.398 "nsid": 1, 00:18:32.398 "bdev_name": "Malloc1", 00:18:32.398 "name": "Malloc1", 00:18:32.398 "nguid": "2945159E859F4917B360E3C1B9BE26C9", 00:18:32.398 "uuid": "2945159e-859f-4917-b360-e3c1b9be26c9" 00:18:32.398 }, 00:18:32.398 { 00:18:32.398 "nsid": 2, 00:18:32.398 "bdev_name": "Malloc3", 00:18:32.398 "name": "Malloc3", 00:18:32.398 "nguid": "045D507FEE124760A8C4426ACFD710EB", 00:18:32.398 "uuid": "045d507f-ee12-4760-a8c4-426acfd710eb" 00:18:32.398 } 00:18:32.398 ] 00:18:32.398 }, 00:18:32.398 { 00:18:32.398 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:32.398 "subtype": "NVMe", 00:18:32.398 "listen_addresses": [ 00:18:32.398 { 00:18:32.398 "trtype": "VFIOUSER", 00:18:32.398 "adrfam": "IPv4", 00:18:32.398 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:32.398 "trsvcid": "0" 00:18:32.398 } 00:18:32.398 ], 00:18:32.398 "allow_any_host": true, 00:18:32.398 "hosts": [], 00:18:32.398 "serial_number": "SPDK2", 00:18:32.398 "model_number": "SPDK bdev Controller", 00:18:32.398 "max_namespaces": 32, 00:18:32.398 "min_cntlid": 1, 00:18:32.398 "max_cntlid": 65519, 00:18:32.398 "namespaces": [ 
00:18:32.398 { 00:18:32.398 "nsid": 1, 00:18:32.398 "bdev_name": "Malloc2", 00:18:32.398 "name": "Malloc2", 00:18:32.398 "nguid": "2CC9352C8AA84E2897257D702E176760", 00:18:32.398 "uuid": "2cc9352c-8aa8-4e28-9725-7d702e176760" 00:18:32.398 }, 00:18:32.398 { 00:18:32.398 "nsid": 2, 00:18:32.398 "bdev_name": "Malloc4", 00:18:32.398 "name": "Malloc4", 00:18:32.398 "nguid": "10997C00C7D348DCBF3B0087B275F6FC", 00:18:32.398 "uuid": "10997c00-c7d3-48dc-bf3b-0087b275f6fc" 00:18:32.398 } 00:18:32.398 ] 00:18:32.398 } 00:18:32.398 ] 00:18:32.398 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 971634 00:18:32.398 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:32.398 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 964132 00:18:32.398 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 964132 ']' 00:18:32.398 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 964132 00:18:32.657 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:32.657 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.657 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 964132 00:18:32.657 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.657 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.657 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 964132' 00:18:32.657 killing process with pid 964132 00:18:32.657 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 964132 00:18:32.657 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 964132 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=971770 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 971770' 00:18:32.916 Process pid: 971770 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 971770 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 971770 ']' 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.916 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.917 12:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.917 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.917 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:32.917 [2024-12-15 12:58:40.639199] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:32.917 [2024-12-15 12:58:40.640067] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:32.917 [2024-12-15 12:58:40.640110] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.917 [2024-12-15 12:58:40.714090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.917 [2024-12-15 12:58:40.734004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.917 [2024-12-15 12:58:40.734045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.917 [2024-12-15 12:58:40.734054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.917 [2024-12-15 12:58:40.734060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.917 [2024-12-15 12:58:40.734065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:32.917 [2024-12-15 12:58:40.735600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.917 [2024-12-15 12:58:40.735710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.917 [2024-12-15 12:58:40.735823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.917 [2024-12-15 12:58:40.735836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.917 [2024-12-15 12:58:40.799118] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:32.917 [2024-12-15 12:58:40.799740] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:32.917 [2024-12-15 12:58:40.800150] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:32.917 [2024-12-15 12:58:40.800500] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:32.917 [2024-12-15 12:58:40.800544] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:18:33.176 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.176 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:33.176 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:34.113 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:34.372 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:34.372 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:34.372 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:34.372 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:34.372 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:34.372 Malloc1 00:18:34.372 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:34.631 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:34.890 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:35.149 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:35.149 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:35.149 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:35.407 Malloc2 00:18:35.407 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:35.407 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:35.666 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 971770 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 971770 ']' 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 971770 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.925 12:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 971770 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 971770' 00:18:35.925 killing process with pid 971770 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 971770 00:18:35.925 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 971770 00:18:36.184 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:36.184 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:36.184 00:18:36.184 real 0m50.790s 00:18:36.184 user 3m16.643s 00:18:36.184 sys 0m3.233s 00:18:36.184 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.184 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:36.184 ************************************ 00:18:36.184 END TEST nvmf_vfio_user 00:18:36.184 ************************************ 00:18:36.184 12:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:36.184 12:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:36.184 12:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.184 12:58:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.184 ************************************ 00:18:36.184 START TEST nvmf_vfio_user_nvme_compliance 00:18:36.184 ************************************ 00:18:36.184 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:36.184 * Looking for test storage... 00:18:36.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:36.184 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:36.184 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:18:36.184 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.444 12:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.444 12:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.444 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:36.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.444 --rc genhtml_branch_coverage=1 00:18:36.444 --rc genhtml_function_coverage=1 00:18:36.444 --rc genhtml_legend=1 00:18:36.444 --rc geninfo_all_blocks=1 00:18:36.444 --rc geninfo_unexecuted_blocks=1 00:18:36.444 00:18:36.444 ' 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:36.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.445 --rc genhtml_branch_coverage=1 00:18:36.445 --rc genhtml_function_coverage=1 00:18:36.445 --rc genhtml_legend=1 00:18:36.445 --rc geninfo_all_blocks=1 00:18:36.445 --rc geninfo_unexecuted_blocks=1 00:18:36.445 00:18:36.445 ' 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:36.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.445 --rc genhtml_branch_coverage=1 00:18:36.445 --rc genhtml_function_coverage=1 00:18:36.445 --rc 
genhtml_legend=1 00:18:36.445 --rc geninfo_all_blocks=1 00:18:36.445 --rc geninfo_unexecuted_blocks=1 00:18:36.445 00:18:36.445 ' 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:36.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.445 --rc genhtml_branch_coverage=1 00:18:36.445 --rc genhtml_function_coverage=1 00:18:36.445 --rc genhtml_legend=1 00:18:36.445 --rc geninfo_all_blocks=1 00:18:36.445 --rc geninfo_unexecuted_blocks=1 00:18:36.445 00:18:36.445 ' 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.445 12:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:36.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:36.445 12:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=972513 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 972513' 00:18:36.445 Process pid: 972513 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 972513 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 972513 ']' 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.445 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:36.445 [2024-12-15 12:58:44.236758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:36.445 [2024-12-15 12:58:44.236807] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.445 [2024-12-15 12:58:44.310293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:36.445 [2024-12-15 12:58:44.331811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.445 [2024-12-15 12:58:44.331855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.445 [2024-12-15 12:58:44.331862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.445 [2024-12-15 12:58:44.331868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.445 [2024-12-15 12:58:44.331873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:36.445 [2024-12-15 12:58:44.333148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.445 [2024-12-15 12:58:44.333257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.445 [2024-12-15 12:58:44.333258] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.705 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.705 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:36.705 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.641 12:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.641 malloc0 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:37.641 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:37.899 00:18:37.899 00:18:37.899 CUnit - A unit testing framework for C - Version 2.1-3 00:18:37.899 http://cunit.sourceforge.net/ 00:18:37.899 00:18:37.899 00:18:37.899 Suite: nvme_compliance 00:18:37.899 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-15 12:58:45.669973] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.899 [2024-12-15 12:58:45.671331] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:37.899 [2024-12-15 12:58:45.671347] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:37.899 [2024-12-15 12:58:45.671353] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:37.899 [2024-12-15 12:58:45.672990] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.899 passed 00:18:37.899 Test: admin_identify_ctrlr_verify_fused ...[2024-12-15 12:58:45.748548] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.899 [2024-12-15 12:58:45.751562] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.899 passed 00:18:38.158 Test: admin_identify_ns ...[2024-12-15 12:58:45.830165] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.158 [2024-12-15 12:58:45.893839] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:38.158 [2024-12-15 12:58:45.901836] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:38.158 [2024-12-15 12:58:45.922947] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:38.158 passed 00:18:38.158 Test: admin_get_features_mandatory_features ...[2024-12-15 12:58:45.995740] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.158 [2024-12-15 12:58:46.001782] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.158 passed 00:18:38.417 Test: admin_get_features_optional_features ...[2024-12-15 12:58:46.077323] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.417 [2024-12-15 12:58:46.081359] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.417 passed 00:18:38.417 Test: admin_set_features_number_of_queues ...[2024-12-15 12:58:46.159105] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.417 [2024-12-15 12:58:46.264915] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.417 passed 00:18:38.676 Test: admin_get_log_page_mandatory_logs ...[2024-12-15 12:58:46.338617] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.676 [2024-12-15 12:58:46.341647] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.676 passed 00:18:38.676 Test: admin_get_log_page_with_lpo ...[2024-12-15 12:58:46.418334] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.676 [2024-12-15 12:58:46.485841] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:38.676 [2024-12-15 12:58:46.498896] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.676 passed 00:18:38.676 Test: fabric_property_get ...[2024-12-15 12:58:46.574509] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.676 [2024-12-15 12:58:46.576734] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:38.676 [2024-12-15 12:58:46.578536] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.934 passed 00:18:38.935 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-15 12:58:46.655057] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.935 [2024-12-15 12:58:46.656292] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:38.935 [2024-12-15 12:58:46.660086] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.935 passed 00:18:38.935 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-15 12:58:46.735734] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.935 [2024-12-15 12:58:46.818837] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:38.935 [2024-12-15 12:58:46.834834] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:38.935 [2024-12-15 12:58:46.839912] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.193 passed 00:18:39.193 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-15 12:58:46.915461] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.193 [2024-12-15 12:58:46.916690] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:39.193 [2024-12-15 12:58:46.918479] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.193 passed 00:18:39.193 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-15 12:58:46.997188] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.193 [2024-12-15 12:58:47.073835] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:39.193 [2024-12-15 
12:58:47.097832] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:39.452 [2024-12-15 12:58:47.102913] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.452 passed 00:18:39.452 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-15 12:58:47.176743] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.452 [2024-12-15 12:58:47.177987] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:39.452 [2024-12-15 12:58:47.178010] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:39.452 [2024-12-15 12:58:47.179766] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.452 passed 00:18:39.452 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-15 12:58:47.257493] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.452 [2024-12-15 12:58:47.348867] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:39.452 [2024-12-15 12:58:47.356840] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:39.711 [2024-12-15 12:58:47.364835] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:39.711 [2024-12-15 12:58:47.372833] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:39.711 [2024-12-15 12:58:47.401917] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.711 passed 00:18:39.711 Test: admin_create_io_sq_verify_pc ...[2024-12-15 12:58:47.478658] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.711 [2024-12-15 12:58:47.494841] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:39.711 [2024-12-15 12:58:47.512832] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.711 passed 00:18:39.711 Test: admin_create_io_qp_max_qps ...[2024-12-15 12:58:47.587340] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.088 [2024-12-15 12:58:48.699837] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:41.347 [2024-12-15 12:58:49.078419] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.347 passed 00:18:41.347 Test: admin_create_io_sq_shared_cq ...[2024-12-15 12:58:49.154324] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.606 [2024-12-15 12:58:49.285832] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:41.606 [2024-12-15 12:58:49.322894] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.606 passed 00:18:41.606 00:18:41.606 Run Summary: Type Total Ran Passed Failed Inactive 00:18:41.606 suites 1 1 n/a 0 0 00:18:41.606 tests 18 18 18 0 0 00:18:41.606 asserts 360 360 360 0 n/a 00:18:41.606 00:18:41.606 Elapsed time = 1.501 seconds 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 972513 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 972513 ']' 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 972513 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972513 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972513' 00:18:41.606 killing process with pid 972513 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 972513 00:18:41.606 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 972513 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:41.866 00:18:41.866 real 0m5.618s 00:18:41.866 user 0m15.725s 00:18:41.866 sys 0m0.519s 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:41.866 ************************************ 00:18:41.866 END TEST nvmf_vfio_user_nvme_compliance 00:18:41.866 ************************************ 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.866 ************************************ 00:18:41.866 START TEST nvmf_vfio_user_fuzz 00:18:41.866 ************************************ 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:41.866 * Looking for test storage... 00:18:41.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:18:41.866 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.126 12:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:42.126 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:42.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.127 --rc genhtml_branch_coverage=1 00:18:42.127 --rc genhtml_function_coverage=1 00:18:42.127 --rc genhtml_legend=1 00:18:42.127 --rc geninfo_all_blocks=1 00:18:42.127 --rc geninfo_unexecuted_blocks=1 00:18:42.127 00:18:42.127 ' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:42.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.127 --rc genhtml_branch_coverage=1 00:18:42.127 --rc genhtml_function_coverage=1 00:18:42.127 --rc genhtml_legend=1 00:18:42.127 --rc geninfo_all_blocks=1 00:18:42.127 --rc geninfo_unexecuted_blocks=1 00:18:42.127 00:18:42.127 ' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:42.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.127 --rc genhtml_branch_coverage=1 00:18:42.127 --rc genhtml_function_coverage=1 00:18:42.127 --rc genhtml_legend=1 00:18:42.127 --rc geninfo_all_blocks=1 00:18:42.127 --rc geninfo_unexecuted_blocks=1 00:18:42.127 00:18:42.127 ' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:42.127 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:42.127 --rc genhtml_branch_coverage=1 00:18:42.127 --rc genhtml_function_coverage=1 00:18:42.127 --rc genhtml_legend=1 00:18:42.127 --rc geninfo_all_blocks=1 00:18:42.127 --rc geninfo_unexecuted_blocks=1 00:18:42.127 00:18:42.127 ' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.127 12:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:42.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=973475 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 973475' 00:18:42.127 Process pid: 973475 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 973475 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 973475 ']' 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.127 12:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.127 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.386 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.386 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:42.386 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.328 malloc0 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:43.328 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:15.411 Fuzzing completed. Shutting down the fuzz application 00:19:15.411 00:19:15.411 Dumping successful admin opcodes: 00:19:15.411 9, 10, 00:19:15.411 Dumping successful io opcodes: 00:19:15.411 0, 00:19:15.411 NS: 0x20000081ef00 I/O qp, Total commands completed: 1004331, total successful commands: 3938, random_seed: 2920345344 00:19:15.411 NS: 0x20000081ef00 admin qp, Total commands completed: 241216, total successful commands: 56, random_seed: 3836516160 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 973475 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 973475 ']' 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 973475 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973475 00:19:15.411 12:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973475' 00:19:15.411 killing process with pid 973475 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 973475 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 973475 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:15.411 00:19:15.411 real 0m32.172s 00:19:15.411 user 0m29.274s 00:19:15.411 sys 0m31.502s 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:15.411 ************************************ 00:19:15.411 END TEST nvmf_vfio_user_fuzz 00:19:15.411 ************************************ 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:15.411 ************************************ 00:19:15.411 START TEST nvmf_auth_target 00:19:15.411 ************************************ 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:15.411 * Looking for test storage... 00:19:15.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.411 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.411 12:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.411 12:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.411 --rc genhtml_branch_coverage=1 00:19:15.411 --rc genhtml_function_coverage=1 00:19:15.411 --rc genhtml_legend=1 00:19:15.411 --rc geninfo_all_blocks=1 00:19:15.411 --rc geninfo_unexecuted_blocks=1 00:19:15.411 00:19:15.411 ' 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.411 --rc genhtml_branch_coverage=1 00:19:15.411 --rc genhtml_function_coverage=1 00:19:15.411 --rc genhtml_legend=1 00:19:15.411 --rc geninfo_all_blocks=1 00:19:15.411 --rc geninfo_unexecuted_blocks=1 00:19:15.411 00:19:15.411 ' 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.411 --rc genhtml_branch_coverage=1 00:19:15.411 --rc genhtml_function_coverage=1 00:19:15.411 --rc genhtml_legend=1 00:19:15.411 --rc geninfo_all_blocks=1 00:19:15.411 --rc geninfo_unexecuted_blocks=1 00:19:15.411 00:19:15.411 ' 00:19:15.411 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:15.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.411 --rc genhtml_branch_coverage=1 00:19:15.411 --rc genhtml_function_coverage=1 00:19:15.411 --rc genhtml_legend=1 00:19:15.412 
--rc geninfo_all_blocks=1 00:19:15.412 --rc geninfo_unexecuted_blocks=1 00:19:15.412 00:19:15.412 ' 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.412 
12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:15.412 12:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:15.412 12:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:15.412 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:20.686 12:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:20.686 12:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:20.686 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:20.686 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.686 
12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.686 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:20.687 Found net devices under 0000:af:00.0: cvl_0_0 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:20.687 
12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:20.687 Found net devices under 0000:af:00.1: cvl_0_1 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:20.687 12:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:20.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:19:20.687 00:19:20.687 --- 10.0.0.2 ping statistics --- 00:19:20.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.687 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:19:20.687 00:19:20.687 --- 10.0.0.1 ping statistics --- 00:19:20.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.687 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:20.687 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=981585 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 981585 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 981585 ']' 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=981761 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=de891b5c08cb61362a2539a51d2e2df06b5f814e9eb79f5d 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Kti 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key de891b5c08cb61362a2539a51d2e2df06b5f814e9eb79f5d 0 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 de891b5c08cb61362a2539a51d2e2df06b5f814e9eb79f5d 0 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=de891b5c08cb61362a2539a51d2e2df06b5f814e9eb79f5d 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Kti 00:19:20.687 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Kti 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Kti 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1acbfbfd4a0aae1f4fba0fa29d0b1807c0c96afc816c7f0c2613dc9bf8685e8d 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Zdw 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1acbfbfd4a0aae1f4fba0fa29d0b1807c0c96afc816c7f0c2613dc9bf8685e8d 3 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1acbfbfd4a0aae1f4fba0fa29d0b1807c0c96afc816c7f0c2613dc9bf8685e8d 3 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1acbfbfd4a0aae1f4fba0fa29d0b1807c0c96afc816c7f0c2613dc9bf8685e8d 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Zdw 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Zdw 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Zdw 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ec1243bd9ef6a81bdce830943a53ffe9 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kHM 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ec1243bd9ef6a81bdce830943a53ffe9 1 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
ec1243bd9ef6a81bdce830943a53ffe9 1 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ec1243bd9ef6a81bdce830943a53ffe9 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kHM 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kHM 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kHM 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=303bf7d3d757e3d28062f73f68088b0c2ba204f1b51c4fdd 00:19:20.688 12:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.G2E 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 303bf7d3d757e3d28062f73f68088b0c2ba204f1b51c4fdd 2 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 303bf7d3d757e3d28062f73f68088b0c2ba204f1b51c4fdd 2 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=303bf7d3d757e3d28062f73f68088b0c2ba204f1b51c4fdd 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.G2E 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.G2E 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.G2E 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7f9c213e3eb13dd5755f63b3e3177fa361cab147add54e7e 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.07K 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7f9c213e3eb13dd5755f63b3e3177fa361cab147add54e7e 2 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7f9c213e3eb13dd5755f63b3e3177fa361cab147add54e7e 2 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7f9c213e3eb13dd5755f63b3e3177fa361cab147add54e7e 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:20.688 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:20.947 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.07K 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.07K 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.07K 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e0e0646197f70ab4c0be29945f00ab96 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dJX 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e0e0646197f70ab4c0be29945f00ab96 1 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e0e0646197f70ab4c0be29945f00ab96 1 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e0e0646197f70ab4c0be29945f00ab96 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dJX 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dJX 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.dJX 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e1eb6ed003029fd8cece50db6ea1d1f6f01b67fd184e875d45288515359ea7b1 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kp4 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e1eb6ed003029fd8cece50db6ea1d1f6f01b67fd184e875d45288515359ea7b1 3 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 e1eb6ed003029fd8cece50db6ea1d1f6f01b67fd184e875d45288515359ea7b1 3 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e1eb6ed003029fd8cece50db6ea1d1f6f01b67fd184e875d45288515359ea7b1 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kp4 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kp4 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.kp4 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 981585 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 981585 ']' 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.948 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.206 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.207 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:21.207 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 981761 /var/tmp/host.sock 00:19:21.207 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 981761 ']' 00:19:21.207 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:21.207 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.207 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:21.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:21.207 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.207 12:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Kti 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.465 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Kti 00:19:21.466 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Kti 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Zdw ]] 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zdw 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zdw 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zdw 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kHM 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kHM 00:19:21.724 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kHM 00:19:21.983 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.G2E ]] 00:19:21.983 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G2E 00:19:21.983 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.983 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.983 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.983 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G2E 00:19:21.983 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G2E 00:19:22.242 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:22.242 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.07K 00:19:22.242 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.242 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.242 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.242 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.07K 00:19:22.242 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.07K 00:19:22.500 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.dJX ]] 00:19:22.500 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dJX 00:19:22.500 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.500 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.500 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.500 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dJX 00:19:22.500 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dJX 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kp4 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kp4 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kp4 00:19:22.759 12:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.759 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.018 12:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.018 12:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.276 00:19:23.276 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.276 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.276 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.535 { 00:19:23.535 "cntlid": 1, 00:19:23.535 "qid": 0, 00:19:23.535 "state": "enabled", 00:19:23.535 "thread": "nvmf_tgt_poll_group_000", 00:19:23.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:23.535 "listen_address": { 00:19:23.535 "trtype": "TCP", 00:19:23.535 "adrfam": "IPv4", 00:19:23.535 "traddr": "10.0.0.2", 00:19:23.535 "trsvcid": "4420" 00:19:23.535 }, 00:19:23.535 "peer_address": { 00:19:23.535 "trtype": "TCP", 00:19:23.535 "adrfam": "IPv4", 00:19:23.535 "traddr": "10.0.0.1", 00:19:23.535 "trsvcid": "42168" 00:19:23.535 }, 00:19:23.535 "auth": { 00:19:23.535 "state": "completed", 00:19:23.535 "digest": "sha256", 00:19:23.535 "dhgroup": "null" 00:19:23.535 } 00:19:23.535 } 00:19:23.535 ]' 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.535 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.793 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:23.793 12:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:24.361 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.361 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:24.361 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.361 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.361 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.361 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.361 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:24.361 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.620 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.879 00:19:24.879 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.879 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.879 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.138 { 00:19:25.138 "cntlid": 3, 00:19:25.138 "qid": 0, 00:19:25.138 "state": "enabled", 00:19:25.138 "thread": "nvmf_tgt_poll_group_000", 00:19:25.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:25.138 "listen_address": { 00:19:25.138 "trtype": "TCP", 00:19:25.138 "adrfam": "IPv4", 00:19:25.138 
"traddr": "10.0.0.2", 00:19:25.138 "trsvcid": "4420" 00:19:25.138 }, 00:19:25.138 "peer_address": { 00:19:25.138 "trtype": "TCP", 00:19:25.138 "adrfam": "IPv4", 00:19:25.138 "traddr": "10.0.0.1", 00:19:25.138 "trsvcid": "42186" 00:19:25.138 }, 00:19:25.138 "auth": { 00:19:25.138 "state": "completed", 00:19:25.138 "digest": "sha256", 00:19:25.138 "dhgroup": "null" 00:19:25.138 } 00:19:25.138 } 00:19:25.138 ]' 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.138 12:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.397 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:25.397 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:25.965 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.965 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:25.965 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.965 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.965 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.965 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.965 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.965 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.224 12:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.483 00:19:26.483 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.483 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.483 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.742 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.742 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.742 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.743 { 00:19:26.743 "cntlid": 5, 00:19:26.743 "qid": 0, 00:19:26.743 "state": "enabled", 00:19:26.743 "thread": "nvmf_tgt_poll_group_000", 00:19:26.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:26.743 "listen_address": { 00:19:26.743 "trtype": "TCP", 00:19:26.743 "adrfam": "IPv4", 00:19:26.743 "traddr": "10.0.0.2", 00:19:26.743 "trsvcid": "4420" 00:19:26.743 }, 00:19:26.743 "peer_address": { 00:19:26.743 "trtype": "TCP", 00:19:26.743 "adrfam": "IPv4", 00:19:26.743 "traddr": "10.0.0.1", 00:19:26.743 "trsvcid": "42220" 00:19:26.743 }, 00:19:26.743 "auth": { 00:19:26.743 "state": "completed", 00:19:26.743 "digest": "sha256", 00:19:26.743 "dhgroup": "null" 00:19:26.743 } 00:19:26.743 } 00:19:26.743 ]' 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.743 12:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.743 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.002 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:27.002 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:27.569 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.569 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:27.569 
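The cycle traced above (add host with a DH-CHAP key, attach a controller through the host RPC socket, verify the qpair's auth state, then detach and remove the host) repeats for each keyid, and hinges on the `${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}` expansion, which appends the controller-key flag only for keys that have one. The following is a minimal dry-run sketch of that flow, assuming the `rpc.py` path, socket, and NQNs shown in the log; `cmds` and the print-only loop are illustrative additions (no live SPDK target is contacted), not part of `target/auth.sh` itself.

```shell
#!/usr/bin/env bash
# Dry-run sketch: build (but do not execute) the per-key attach RPCs,
# mirroring the conditional controller-key expansion from auth.sh.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

keys=(key0 key1 key2 key3)
ckeys=(ckey0 ckey1 ckey2 "")   # key3 has no controller key in this run

cmds=()
for i in "${!keys[@]}"; do
    # ${ckeys[i]:+...} expands to nothing when the controller key is empty,
    # so the flag pair is dropped entirely for key3.
    ckey=(${ckeys[i]:+--dhchap-ctrlr-key "${ckeys[i]}"})
    cmd=("$rpc" -s "$sock" bdev_nvme_attach_controller
         -t tcp -f ipv4 -a 10.0.0.2 -s 4420
         -q "$hostnqn" -n "$subnqn" -b nvme0
         --dhchap-key "${keys[i]}" "${ckey[@]}")
    cmds+=("${cmd[*]}")
    printf '%s\n' "${cmd[*]}"
done
```

In the real script the same conditional array is also passed to `nvmf_subsystem_add_host`, which is why the key3 iterations in the log carry `--dhchap-key key3` alone while key0 through key2 also send `--dhchap-ctrlr-key`.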
12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.569 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.569 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.569 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.570 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.570 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.828 12:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.828 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.087 00:19:28.087 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.087 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.087 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.087 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.087 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.087 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.087 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.087 12:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.087 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.087 { 00:19:28.087 "cntlid": 7, 00:19:28.087 "qid": 0, 00:19:28.087 "state": "enabled", 00:19:28.087 "thread": "nvmf_tgt_poll_group_000", 00:19:28.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:28.087 "listen_address": { 00:19:28.087 "trtype": "TCP", 00:19:28.087 "adrfam": "IPv4", 00:19:28.087 "traddr": "10.0.0.2", 00:19:28.087 "trsvcid": "4420" 00:19:28.087 }, 00:19:28.087 "peer_address": { 00:19:28.087 "trtype": "TCP", 00:19:28.087 "adrfam": "IPv4", 00:19:28.087 "traddr": "10.0.0.1", 00:19:28.087 "trsvcid": "42240" 00:19:28.087 }, 00:19:28.087 "auth": { 00:19:28.087 "state": "completed", 00:19:28.087 "digest": "sha256", 00:19:28.087 "dhgroup": "null" 00:19:28.087 } 00:19:28.087 } 00:19:28.087 ]' 00:19:28.087 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.345 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.346 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.346 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:28.346 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.346 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.346 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.346 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:28.604 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:28.604 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:29.172 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.172 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:29.172 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.172 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.172 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.172 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.172 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.172 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.172 12:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.172 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.172 12:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.431 00:19:29.431 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.431 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.431 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.689 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.689 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.689 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.689 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.689 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.689 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.689 { 00:19:29.689 "cntlid": 9, 00:19:29.689 "qid": 0, 00:19:29.689 "state": "enabled", 00:19:29.689 "thread": "nvmf_tgt_poll_group_000", 00:19:29.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:29.689 "listen_address": { 00:19:29.689 "trtype": "TCP", 00:19:29.689 "adrfam": "IPv4", 00:19:29.689 "traddr": "10.0.0.2", 00:19:29.689 "trsvcid": "4420" 00:19:29.689 }, 00:19:29.689 "peer_address": { 
00:19:29.689 "trtype": "TCP", 00:19:29.689 "adrfam": "IPv4", 00:19:29.689 "traddr": "10.0.0.1", 00:19:29.690 "trsvcid": "42284" 00:19:29.690 }, 00:19:29.690 "auth": { 00:19:29.690 "state": "completed", 00:19:29.690 "digest": "sha256", 00:19:29.690 "dhgroup": "ffdhe2048" 00:19:29.690 } 00:19:29.690 } 00:19:29.690 ]' 00:19:29.690 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.690 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.690 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.987 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.987 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.987 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.987 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.987 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.322 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:30.322 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:30.641 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.641 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:30.641 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.641 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.641 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.641 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.641 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.641 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.900 12:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.900 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.901 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.159 00:19:31.159 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.159 12:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.159 12:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.418 { 00:19:31.418 "cntlid": 11, 00:19:31.418 "qid": 0, 00:19:31.418 "state": "enabled", 00:19:31.418 "thread": "nvmf_tgt_poll_group_000", 00:19:31.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:31.418 "listen_address": { 00:19:31.418 "trtype": "TCP", 00:19:31.418 "adrfam": "IPv4", 00:19:31.418 "traddr": "10.0.0.2", 00:19:31.418 "trsvcid": "4420" 00:19:31.418 }, 00:19:31.418 "peer_address": { 00:19:31.418 "trtype": "TCP", 00:19:31.418 "adrfam": "IPv4", 00:19:31.418 "traddr": "10.0.0.1", 00:19:31.418 "trsvcid": "42318" 00:19:31.418 }, 00:19:31.418 "auth": { 00:19:31.418 "state": "completed", 00:19:31.418 "digest": "sha256", 00:19:31.418 "dhgroup": "ffdhe2048" 00:19:31.418 } 00:19:31.418 } 00:19:31.418 ]' 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.418 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.676 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:31.676 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:32.243 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.244 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:32.244 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.244 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.244 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.244 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.244 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.244 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.502 12:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.502 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.761 00:19:32.761 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.761 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.761 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.019 { 00:19:33.019 "cntlid": 13, 00:19:33.019 "qid": 0, 00:19:33.019 "state": "enabled", 00:19:33.019 "thread": "nvmf_tgt_poll_group_000", 00:19:33.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:33.019 "listen_address": { 00:19:33.019 "trtype": "TCP", 00:19:33.019 "adrfam": "IPv4", 00:19:33.019 "traddr": "10.0.0.2", 00:19:33.019 "trsvcid": "4420" 00:19:33.019 }, 00:19:33.019 "peer_address": { 00:19:33.019 "trtype": "TCP", 00:19:33.019 "adrfam": "IPv4", 00:19:33.019 "traddr": "10.0.0.1", 00:19:33.019 "trsvcid": "42342" 00:19:33.019 }, 00:19:33.019 "auth": { 00:19:33.019 "state": "completed", 00:19:33.019 "digest": "sha256", 00:19:33.019 "dhgroup": "ffdhe2048" 00:19:33.019 } 00:19:33.019 } 00:19:33.019 ]' 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:33.019 12:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.278 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:33.278 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:33.845 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.845 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:33.845 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.845 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.845 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.845 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.845 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.845 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.104 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.363 00:19:34.363 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.363 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.363 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.623 { 00:19:34.623 "cntlid": 15, 00:19:34.623 "qid": 0, 00:19:34.623 "state": "enabled", 00:19:34.623 "thread": "nvmf_tgt_poll_group_000", 00:19:34.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:34.623 "listen_address": { 00:19:34.623 "trtype": "TCP", 00:19:34.623 "adrfam": "IPv4", 00:19:34.623 "traddr": "10.0.0.2", 00:19:34.623 "trsvcid": 
"4420" 00:19:34.623 }, 00:19:34.623 "peer_address": { 00:19:34.623 "trtype": "TCP", 00:19:34.623 "adrfam": "IPv4", 00:19:34.623 "traddr": "10.0.0.1", 00:19:34.623 "trsvcid": "44734" 00:19:34.623 }, 00:19:34.623 "auth": { 00:19:34.623 "state": "completed", 00:19:34.623 "digest": "sha256", 00:19:34.623 "dhgroup": "ffdhe2048" 00:19:34.623 } 00:19:34.623 } 00:19:34.623 ]' 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.623 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.882 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:34.882 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:35.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:35.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.450 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.709 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.968 00:19:35.968 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.969 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:35.969 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.227 { 00:19:36.227 "cntlid": 17, 00:19:36.227 "qid": 0, 00:19:36.227 "state": "enabled", 00:19:36.227 "thread": "nvmf_tgt_poll_group_000", 00:19:36.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:36.227 "listen_address": { 00:19:36.227 "trtype": "TCP", 00:19:36.227 "adrfam": "IPv4", 00:19:36.227 "traddr": "10.0.0.2", 00:19:36.227 "trsvcid": "4420" 00:19:36.227 }, 00:19:36.227 "peer_address": { 00:19:36.227 "trtype": "TCP", 00:19:36.227 "adrfam": "IPv4", 00:19:36.227 "traddr": "10.0.0.1", 00:19:36.227 "trsvcid": "44764" 00:19:36.227 }, 00:19:36.227 "auth": { 00:19:36.227 "state": "completed", 00:19:36.227 "digest": "sha256", 00:19:36.227 "dhgroup": "ffdhe3072" 00:19:36.227 } 00:19:36.227 } 00:19:36.227 ]' 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.227 12:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.227 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.227 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.227 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.227 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.485 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:36.485 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.053 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.054 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.054 12:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.054 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.314 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.314 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.314 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.314 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.314 00:19:37.314 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.314 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.314 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.573 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.573 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.573 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.573 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.573 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.573 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.573 { 00:19:37.573 "cntlid": 19, 00:19:37.573 "qid": 0, 00:19:37.573 "state": "enabled", 00:19:37.573 "thread": "nvmf_tgt_poll_group_000", 00:19:37.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:37.573 "listen_address": { 00:19:37.573 "trtype": "TCP", 00:19:37.573 "adrfam": "IPv4", 00:19:37.573 "traddr": "10.0.0.2", 00:19:37.573 "trsvcid": "4420" 00:19:37.573 }, 00:19:37.573 "peer_address": { 00:19:37.573 "trtype": "TCP", 00:19:37.573 "adrfam": "IPv4", 00:19:37.573 "traddr": "10.0.0.1", 00:19:37.573 "trsvcid": "44784" 00:19:37.573 }, 00:19:37.573 "auth": { 00:19:37.573 "state": "completed", 00:19:37.573 "digest": "sha256", 00:19:37.573 "dhgroup": "ffdhe3072" 00:19:37.573 } 00:19:37.573 } 00:19:37.573 ]' 00:19:37.573 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.573 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.573 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.832 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.832 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.832 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.832 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:37.832 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.091 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:38.091 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.659 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.918 00:19:38.918 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.918 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.918 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.177 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.177 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.177 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.177 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.177 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.177 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.177 { 00:19:39.177 "cntlid": 21, 00:19:39.177 "qid": 0, 00:19:39.177 "state": "enabled", 00:19:39.177 "thread": "nvmf_tgt_poll_group_000", 00:19:39.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:39.177 "listen_address": { 
00:19:39.177 "trtype": "TCP", 00:19:39.177 "adrfam": "IPv4", 00:19:39.177 "traddr": "10.0.0.2", 00:19:39.177 "trsvcid": "4420" 00:19:39.177 }, 00:19:39.177 "peer_address": { 00:19:39.177 "trtype": "TCP", 00:19:39.177 "adrfam": "IPv4", 00:19:39.177 "traddr": "10.0.0.1", 00:19:39.177 "trsvcid": "44822" 00:19:39.177 }, 00:19:39.177 "auth": { 00:19:39.177 "state": "completed", 00:19:39.177 "digest": "sha256", 00:19:39.177 "dhgroup": "ffdhe3072" 00:19:39.177 } 00:19:39.177 } 00:19:39.177 ]' 00:19:39.177 12:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.177 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.177 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.177 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.177 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.436 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.436 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.436 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.436 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:39.436 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:40.004 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.004 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:40.004 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.004 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.004 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.004 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.004 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.004 12:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.263 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.522 00:19:40.522 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.522 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:40.522 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.781 { 00:19:40.781 "cntlid": 23, 00:19:40.781 "qid": 0, 00:19:40.781 "state": "enabled", 00:19:40.781 "thread": "nvmf_tgt_poll_group_000", 00:19:40.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:40.781 "listen_address": { 00:19:40.781 "trtype": "TCP", 00:19:40.781 "adrfam": "IPv4", 00:19:40.781 "traddr": "10.0.0.2", 00:19:40.781 "trsvcid": "4420" 00:19:40.781 }, 00:19:40.781 "peer_address": { 00:19:40.781 "trtype": "TCP", 00:19:40.781 "adrfam": "IPv4", 00:19:40.781 "traddr": "10.0.0.1", 00:19:40.781 "trsvcid": "44862" 00:19:40.781 }, 00:19:40.781 "auth": { 00:19:40.781 "state": "completed", 00:19:40.781 "digest": "sha256", 00:19:40.781 "dhgroup": "ffdhe3072" 00:19:40.781 } 00:19:40.781 } 00:19:40.781 ]' 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.781 12:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.781 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.040 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:41.040 12:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:41.608 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.608 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:41.608 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:41.608 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.608 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.608 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.608 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.608 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.608 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.867 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.126 00:19:42.126 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.126 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.126 12:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.385 12:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.385 { 00:19:42.385 "cntlid": 25, 00:19:42.385 "qid": 0, 00:19:42.385 "state": "enabled", 00:19:42.385 "thread": "nvmf_tgt_poll_group_000", 00:19:42.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:42.385 "listen_address": { 00:19:42.385 "trtype": "TCP", 00:19:42.385 "adrfam": "IPv4", 00:19:42.385 "traddr": "10.0.0.2", 00:19:42.385 "trsvcid": "4420" 00:19:42.385 }, 00:19:42.385 "peer_address": { 00:19:42.385 "trtype": "TCP", 00:19:42.385 "adrfam": "IPv4", 00:19:42.385 "traddr": "10.0.0.1", 00:19:42.385 "trsvcid": "44894" 00:19:42.385 }, 00:19:42.385 "auth": { 00:19:42.385 "state": "completed", 00:19:42.385 "digest": "sha256", 00:19:42.385 "dhgroup": "ffdhe4096" 00:19:42.385 } 00:19:42.385 } 00:19:42.385 ]' 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.385 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.385 12:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.644 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:42.644 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:43.212 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.212 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:43.212 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.212 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.212 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.212 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.212 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.212 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.472 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.731 00:19:43.731 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.731 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.731 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.989 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.989 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.989 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.989 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.989 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.989 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.989 { 00:19:43.989 "cntlid": 27, 00:19:43.989 "qid": 0, 00:19:43.989 "state": "enabled", 00:19:43.989 "thread": "nvmf_tgt_poll_group_000", 00:19:43.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:43.989 
"listen_address": { 00:19:43.989 "trtype": "TCP", 00:19:43.989 "adrfam": "IPv4", 00:19:43.989 "traddr": "10.0.0.2", 00:19:43.990 "trsvcid": "4420" 00:19:43.990 }, 00:19:43.990 "peer_address": { 00:19:43.990 "trtype": "TCP", 00:19:43.990 "adrfam": "IPv4", 00:19:43.990 "traddr": "10.0.0.1", 00:19:43.990 "trsvcid": "51158" 00:19:43.990 }, 00:19:43.990 "auth": { 00:19:43.990 "state": "completed", 00:19:43.990 "digest": "sha256", 00:19:43.990 "dhgroup": "ffdhe4096" 00:19:43.990 } 00:19:43.990 } 00:19:43.990 ]' 00:19:43.990 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.990 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.990 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.990 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.990 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.990 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.990 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.990 12:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.249 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:44.249 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:44.817 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.817 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:44.817 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.817 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.817 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.817 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.817 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.817 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.076 12:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.335 00:19:45.335 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:45.335 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.335 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.594 { 00:19:45.594 "cntlid": 29, 00:19:45.594 "qid": 0, 00:19:45.594 "state": "enabled", 00:19:45.594 "thread": "nvmf_tgt_poll_group_000", 00:19:45.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:45.594 "listen_address": { 00:19:45.594 "trtype": "TCP", 00:19:45.594 "adrfam": "IPv4", 00:19:45.594 "traddr": "10.0.0.2", 00:19:45.594 "trsvcid": "4420" 00:19:45.594 }, 00:19:45.594 "peer_address": { 00:19:45.594 "trtype": "TCP", 00:19:45.594 "adrfam": "IPv4", 00:19:45.594 "traddr": "10.0.0.1", 00:19:45.594 "trsvcid": "51176" 00:19:45.594 }, 00:19:45.594 "auth": { 00:19:45.594 "state": "completed", 00:19:45.594 "digest": "sha256", 00:19:45.594 "dhgroup": "ffdhe4096" 00:19:45.594 } 00:19:45.594 } 00:19:45.594 ]' 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.594 12:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.594 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.853 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:45.853 12:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:46.421 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.421 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.421 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.421 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.421 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.421 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.421 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.421 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:46.680 12:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.680 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.939 00:19:46.939 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.939 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.939 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.198 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.198 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.198 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.198 12:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.198 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.198 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.198 { 00:19:47.198 "cntlid": 31, 00:19:47.198 "qid": 0, 00:19:47.198 "state": "enabled", 00:19:47.198 "thread": "nvmf_tgt_poll_group_000", 00:19:47.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:47.198 "listen_address": { 00:19:47.198 "trtype": "TCP", 00:19:47.198 "adrfam": "IPv4", 00:19:47.198 "traddr": "10.0.0.2", 00:19:47.198 "trsvcid": "4420" 00:19:47.198 }, 00:19:47.198 "peer_address": { 00:19:47.198 "trtype": "TCP", 00:19:47.198 "adrfam": "IPv4", 00:19:47.198 "traddr": "10.0.0.1", 00:19:47.198 "trsvcid": "51194" 00:19:47.198 }, 00:19:47.198 "auth": { 00:19:47.198 "state": "completed", 00:19:47.198 "digest": "sha256", 00:19:47.198 "dhgroup": "ffdhe4096" 00:19:47.198 } 00:19:47.198 } 00:19:47.198 ]' 00:19:47.198 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.198 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.198 12:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.198 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.198 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.198 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.198 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.198 12:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.456 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:47.456 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:48.023 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.023 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:48.023 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.023 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.023 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.023 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.023 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.023 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:48.023 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.282 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.541 00:19:48.541 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.541 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.541 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.800 { 00:19:48.800 "cntlid": 33, 00:19:48.800 "qid": 0, 00:19:48.800 "state": "enabled", 00:19:48.800 "thread": "nvmf_tgt_poll_group_000", 00:19:48.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:48.800 "listen_address": { 
00:19:48.800 "trtype": "TCP", 00:19:48.800 "adrfam": "IPv4", 00:19:48.800 "traddr": "10.0.0.2", 00:19:48.800 "trsvcid": "4420" 00:19:48.800 }, 00:19:48.800 "peer_address": { 00:19:48.800 "trtype": "TCP", 00:19:48.800 "adrfam": "IPv4", 00:19:48.800 "traddr": "10.0.0.1", 00:19:48.800 "trsvcid": "51226" 00:19:48.800 }, 00:19:48.800 "auth": { 00:19:48.800 "state": "completed", 00:19:48.800 "digest": "sha256", 00:19:48.800 "dhgroup": "ffdhe6144" 00:19:48.800 } 00:19:48.800 } 00:19:48.800 ]' 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.800 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.059 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.059 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.059 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.059 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.059 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:49.059 12:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:49.626 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.626 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:49.626 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.626 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.626 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.626 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.626 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.626 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.885 12:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.460 00:19:50.460 12:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.460 { 00:19:50.460 "cntlid": 35, 00:19:50.460 "qid": 0, 00:19:50.460 "state": "enabled", 00:19:50.460 "thread": "nvmf_tgt_poll_group_000", 00:19:50.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:50.460 "listen_address": { 00:19:50.460 "trtype": "TCP", 00:19:50.460 "adrfam": "IPv4", 00:19:50.460 "traddr": "10.0.0.2", 00:19:50.460 "trsvcid": "4420" 00:19:50.460 }, 00:19:50.460 "peer_address": { 00:19:50.460 "trtype": "TCP", 00:19:50.460 "adrfam": "IPv4", 00:19:50.460 "traddr": "10.0.0.1", 00:19:50.460 "trsvcid": "51246" 00:19:50.460 }, 00:19:50.460 "auth": { 00:19:50.460 "state": "completed", 00:19:50.460 "digest": "sha256", 00:19:50.460 "dhgroup": "ffdhe6144" 00:19:50.460 } 00:19:50.460 } 00:19:50.460 ]' 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.460 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.720 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.720 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.720 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.720 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.720 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.979 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:50.979 12:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.546 12:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.546 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.115 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.115 { 00:19:52.115 "cntlid": 37, 00:19:52.115 "qid": 0, 00:19:52.115 "state": "enabled", 00:19:52.115 "thread": "nvmf_tgt_poll_group_000", 00:19:52.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:52.115 "listen_address": { 00:19:52.115 "trtype": "TCP", 00:19:52.115 "adrfam": "IPv4", 00:19:52.115 "traddr": "10.0.0.2", 00:19:52.115 "trsvcid": "4420" 00:19:52.115 }, 00:19:52.115 "peer_address": { 00:19:52.115 "trtype": "TCP", 00:19:52.115 "adrfam": "IPv4", 00:19:52.115 "traddr": "10.0.0.1", 00:19:52.115 "trsvcid": "51274" 00:19:52.115 }, 00:19:52.115 "auth": { 00:19:52.115 "state": "completed", 00:19:52.115 "digest": "sha256", 00:19:52.115 "dhgroup": "ffdhe6144" 00:19:52.115 } 00:19:52.115 } 00:19:52.115 ]' 00:19:52.115 12:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.373 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.373 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.373 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.373 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.373 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:52.374 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.374 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.634 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:52.634 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:53.200 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.200 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.200 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.200 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.200 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.200 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:53.200 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.200 13:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.200 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:53.200 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.200 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.200 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:53.200 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.200 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.200 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:53.200 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.200 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.458 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.458 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.458 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.458 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.717 00:19:53.717 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.717 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.717 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.977 { 00:19:53.977 "cntlid": 39, 00:19:53.977 "qid": 0, 00:19:53.977 "state": "enabled", 00:19:53.977 "thread": "nvmf_tgt_poll_group_000", 00:19:53.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:53.977 "listen_address": { 00:19:53.977 "trtype": 
"TCP", 00:19:53.977 "adrfam": "IPv4", 00:19:53.977 "traddr": "10.0.0.2", 00:19:53.977 "trsvcid": "4420" 00:19:53.977 }, 00:19:53.977 "peer_address": { 00:19:53.977 "trtype": "TCP", 00:19:53.977 "adrfam": "IPv4", 00:19:53.977 "traddr": "10.0.0.1", 00:19:53.977 "trsvcid": "38180" 00:19:53.977 }, 00:19:53.977 "auth": { 00:19:53.977 "state": "completed", 00:19:53.977 "digest": "sha256", 00:19:53.977 "dhgroup": "ffdhe6144" 00:19:53.977 } 00:19:53.977 } 00:19:53.977 ]' 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.977 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.236 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:54.236 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:19:54.805 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.805 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:54.805 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.805 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.805 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.805 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.805 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.805 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:54.805 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.062 13:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.062 13:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.629 00:19:55.629 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.630 { 00:19:55.630 "cntlid": 41, 00:19:55.630 "qid": 0, 00:19:55.630 "state": "enabled", 00:19:55.630 "thread": "nvmf_tgt_poll_group_000", 00:19:55.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:55.630 "listen_address": { 00:19:55.630 "trtype": "TCP", 00:19:55.630 "adrfam": "IPv4", 00:19:55.630 "traddr": "10.0.0.2", 00:19:55.630 "trsvcid": "4420" 00:19:55.630 }, 00:19:55.630 "peer_address": { 00:19:55.630 "trtype": "TCP", 00:19:55.630 "adrfam": "IPv4", 00:19:55.630 "traddr": "10.0.0.1", 00:19:55.630 "trsvcid": "38198" 00:19:55.630 }, 00:19:55.630 "auth": { 00:19:55.630 "state": "completed", 00:19:55.630 "digest": "sha256", 00:19:55.630 "dhgroup": "ffdhe8192" 00:19:55.630 } 00:19:55.630 } 00:19:55.630 ]' 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.630 13:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.630 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.889 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.889 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.889 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.889 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:55.889 13:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:19:56.455 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
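Each cycle in the log above ends with the same verification: `nvmf_subsystem_get_qpairs` returns a JSON array, and the test asserts the negotiated `auth.digest`, `auth.dhgroup`, and `auth.state` on the first qpair with `jq` filters. The following is a minimal stand-alone sketch of that check, not part of the test suite: the `qpairs` JSON is stubbed with the shape seen in the log output, `field` is a hypothetical helper, and `python3` stands in for `jq` so the snippet runs without a live SPDK target.

```shell
# Stubbed RPC output: the JSON shape nvmf_subsystem_get_qpairs emits in the log,
# trimmed to the fields the test actually inspects (hypothetical stand-in data).
qpairs='[{"cntlid": 33, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe6144"}}]'

# Hypothetical helper mirroring the jq -r '.[0].auth.<field>' filters;
# python3 is used instead of jq purely for portability of this sketch.
field() { printf '%s' "$qpairs" | python3 -c "import sys,json; print(json.load(sys.stdin)[0]['auth']['$1'])"; }

digest=$(field digest)
dhgroup=$(field dhgroup)
state=$(field state)

# Mirrors the [[ sha256 == \s\h\a\2\5\6 ]] style assertions in the log.
[ "$digest" = "sha256" ] && [ "$dhgroup" = "ffdhe6144" ] && [ "$state" = "completed" ] \
  && echo "auth verified"
```

In the real run these three checks gate the next step: only after the qpair reports `"state": "completed"` does the test detach the controller and move to the next key/dhgroup combination.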
00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.714 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.282 00:19:57.282 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.282 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.282 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.540 { 00:19:57.540 "cntlid": 43, 00:19:57.540 "qid": 0, 00:19:57.540 "state": "enabled", 00:19:57.540 "thread": "nvmf_tgt_poll_group_000", 00:19:57.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:57.540 "listen_address": { 00:19:57.540 "trtype": "TCP", 00:19:57.540 "adrfam": "IPv4", 00:19:57.540 "traddr": "10.0.0.2", 00:19:57.540 "trsvcid": "4420" 00:19:57.540 }, 00:19:57.540 "peer_address": { 00:19:57.540 "trtype": "TCP", 00:19:57.540 "adrfam": "IPv4", 00:19:57.540 "traddr": "10.0.0.1", 00:19:57.540 "trsvcid": "38218" 00:19:57.540 }, 00:19:57.540 "auth": { 00:19:57.540 "state": "completed", 00:19:57.540 "digest": "sha256", 00:19:57.540 "dhgroup": "ffdhe8192" 00:19:57.540 } 00:19:57.540 } 00:19:57.540 ]' 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.540 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.799 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:57.799 13:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:19:58.367 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.367 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:58.367 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.367 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.367 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.367 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:58.367 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.367 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.626 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.194 00:19:59.194 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.194 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.194 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.194 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.194 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.194 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.194 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.194 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.194 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.194 { 00:19:59.194 "cntlid": 45, 00:19:59.194 "qid": 0, 00:19:59.194 "state": "enabled", 00:19:59.194 "thread": "nvmf_tgt_poll_group_000", 00:19:59.194 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:59.194 "listen_address": { 00:19:59.194 "trtype": "TCP", 00:19:59.194 "adrfam": "IPv4", 00:19:59.194 "traddr": "10.0.0.2", 00:19:59.194 "trsvcid": "4420" 00:19:59.194 }, 00:19:59.194 "peer_address": { 00:19:59.194 "trtype": "TCP", 00:19:59.194 "adrfam": "IPv4", 00:19:59.194 "traddr": "10.0.0.1", 00:19:59.194 "trsvcid": "38256" 00:19:59.194 }, 00:19:59.194 "auth": { 00:19:59.194 "state": "completed", 00:19:59.194 "digest": "sha256", 00:19:59.194 "dhgroup": "ffdhe8192" 00:19:59.194 } 00:19:59.194 } 00:19:59.194 ]' 00:19:59.194 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.453 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.453 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.453 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.453 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.453 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.453 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.453 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.711 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:19:59.711 13:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:00.279 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.279 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:00.279 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.279 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.279 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.279 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.279 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.279 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.279 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:00.279 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:00.279 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.279 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.279 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:00.279 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.279 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:00.279 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.279 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.538 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.538 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:00.538 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.538 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.797 00:20:00.797 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:00.797 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.797 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.056 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.056 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.056 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.056 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.056 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.056 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.056 { 00:20:01.056 "cntlid": 47, 00:20:01.056 "qid": 0, 00:20:01.056 "state": "enabled", 00:20:01.056 "thread": "nvmf_tgt_poll_group_000", 00:20:01.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:01.056 "listen_address": { 00:20:01.056 "trtype": "TCP", 00:20:01.056 "adrfam": "IPv4", 00:20:01.056 "traddr": "10.0.0.2", 00:20:01.056 "trsvcid": "4420" 00:20:01.056 }, 00:20:01.056 "peer_address": { 00:20:01.056 "trtype": "TCP", 00:20:01.056 "adrfam": "IPv4", 00:20:01.056 "traddr": "10.0.0.1", 00:20:01.056 "trsvcid": "38288" 00:20:01.056 }, 00:20:01.056 "auth": { 00:20:01.056 "state": "completed", 00:20:01.056 "digest": "sha256", 00:20:01.056 "dhgroup": "ffdhe8192" 00:20:01.056 } 00:20:01.056 } 00:20:01.056 ]' 00:20:01.056 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.056 13:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.056 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.315 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.315 13:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.315 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.315 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.315 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.573 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:01.573 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.141 13:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.141 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.400 00:20:02.400 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.400 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.400 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.659 13:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.659 { 00:20:02.659 "cntlid": 49, 00:20:02.659 "qid": 0, 00:20:02.659 "state": "enabled", 00:20:02.659 "thread": "nvmf_tgt_poll_group_000", 00:20:02.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:02.659 "listen_address": { 00:20:02.659 "trtype": "TCP", 00:20:02.659 "adrfam": "IPv4", 00:20:02.659 "traddr": "10.0.0.2", 00:20:02.659 "trsvcid": "4420" 00:20:02.659 }, 00:20:02.659 "peer_address": { 00:20:02.659 "trtype": "TCP", 00:20:02.659 "adrfam": "IPv4", 00:20:02.659 "traddr": "10.0.0.1", 00:20:02.659 "trsvcid": "38334" 00:20:02.659 }, 00:20:02.659 "auth": { 00:20:02.659 "state": "completed", 00:20:02.659 "digest": "sha384", 00:20:02.659 "dhgroup": "null" 00:20:02.659 } 00:20:02.659 } 00:20:02.659 ]' 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:02.659 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.917 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.917 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.917 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.917 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:02.917 13:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:03.485 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.485 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.485 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.485 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.485 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.485 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.485 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.485 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.744 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.003 00:20:04.003 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.003 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.003 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.262 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.262 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.262 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.262 13:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.262 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.262 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.262 { 00:20:04.262 "cntlid": 51, 
00:20:04.262 "qid": 0, 00:20:04.262 "state": "enabled", 00:20:04.262 "thread": "nvmf_tgt_poll_group_000", 00:20:04.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:04.262 "listen_address": { 00:20:04.262 "trtype": "TCP", 00:20:04.262 "adrfam": "IPv4", 00:20:04.262 "traddr": "10.0.0.2", 00:20:04.262 "trsvcid": "4420" 00:20:04.262 }, 00:20:04.262 "peer_address": { 00:20:04.262 "trtype": "TCP", 00:20:04.262 "adrfam": "IPv4", 00:20:04.262 "traddr": "10.0.0.1", 00:20:04.262 "trsvcid": "53934" 00:20:04.262 }, 00:20:04.262 "auth": { 00:20:04.262 "state": "completed", 00:20:04.262 "digest": "sha384", 00:20:04.262 "dhgroup": "null" 00:20:04.262 } 00:20:04.262 } 00:20:04.262 ]' 00:20:04.262 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.262 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.262 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.262 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.262 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.262 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.262 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.263 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.521 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret 
DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:04.521 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:05.089 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.089 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.089 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.089 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.089 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.089 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.089 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.089 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.348 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.607 00:20:05.607 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.607 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.607 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.867 { 00:20:05.867 "cntlid": 53, 00:20:05.867 "qid": 0, 00:20:05.867 "state": "enabled", 00:20:05.867 "thread": "nvmf_tgt_poll_group_000", 00:20:05.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:05.867 "listen_address": { 00:20:05.867 "trtype": "TCP", 00:20:05.867 "adrfam": "IPv4", 00:20:05.867 "traddr": "10.0.0.2", 00:20:05.867 "trsvcid": "4420" 00:20:05.867 }, 00:20:05.867 "peer_address": { 00:20:05.867 "trtype": "TCP", 00:20:05.867 "adrfam": "IPv4", 00:20:05.867 "traddr": "10.0.0.1", 00:20:05.867 "trsvcid": "53974" 00:20:05.867 }, 00:20:05.867 "auth": { 00:20:05.867 "state": "completed", 00:20:05.867 "digest": "sha384", 00:20:05.867 "dhgroup": "null" 00:20:05.867 } 00:20:05.867 } 
00:20:05.867 ]' 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.867 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.126 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:06.126 13:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:06.694 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.694 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.694 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:06.694 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.694 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.694 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.694 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.694 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:06.694 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.953 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.212 00:20:07.212 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.212 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.212 13:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.471 { 00:20:07.471 "cntlid": 55, 00:20:07.471 "qid": 0, 00:20:07.471 "state": "enabled", 00:20:07.471 "thread": "nvmf_tgt_poll_group_000", 00:20:07.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:07.471 "listen_address": { 00:20:07.471 "trtype": "TCP", 00:20:07.471 "adrfam": "IPv4", 00:20:07.471 "traddr": "10.0.0.2", 00:20:07.471 "trsvcid": "4420" 00:20:07.471 }, 00:20:07.471 "peer_address": { 00:20:07.471 "trtype": "TCP", 00:20:07.471 "adrfam": "IPv4", 00:20:07.471 "traddr": "10.0.0.1", 00:20:07.471 "trsvcid": "54006" 00:20:07.471 }, 00:20:07.471 "auth": { 00:20:07.471 "state": "completed", 00:20:07.471 "digest": "sha384", 00:20:07.471 "dhgroup": "null" 00:20:07.471 } 00:20:07.471 } 00:20:07.471 ]' 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.471 13:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.471 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.779 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:07.779 13:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.368 13:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.368 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.626 00:20:08.626 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.626 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.626 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.885 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.885 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.885 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.885 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.885 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.885 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.885 { 00:20:08.885 "cntlid": 57, 00:20:08.885 "qid": 0, 00:20:08.885 "state": "enabled", 00:20:08.885 "thread": "nvmf_tgt_poll_group_000", 00:20:08.885 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.885 "listen_address": { 00:20:08.885 "trtype": "TCP", 00:20:08.885 "adrfam": "IPv4", 00:20:08.885 "traddr": "10.0.0.2", 00:20:08.885 "trsvcid": "4420" 00:20:08.885 }, 00:20:08.885 "peer_address": { 00:20:08.885 "trtype": "TCP", 00:20:08.885 "adrfam": "IPv4", 00:20:08.885 "traddr": "10.0.0.1", 00:20:08.885 "trsvcid": "54040" 00:20:08.885 }, 00:20:08.885 "auth": { 00:20:08.885 "state": "completed", 00:20:08.885 "digest": "sha384", 00:20:08.885 "dhgroup": "ffdhe2048" 00:20:08.885 } 00:20:08.885 } 00:20:08.885 ]' 00:20:08.885 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.885 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.885 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.143 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.143 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.143 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.143 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.144 13:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.144 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret 
DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:09.144 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:09.710 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.711 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:09.711 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.711 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.969 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.969 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.969 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.969 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.969 13:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.970 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.228 00:20:10.228 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.228 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.228 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.487 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.487 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.487 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.487 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.487 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.487 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.487 { 00:20:10.487 "cntlid": 59, 00:20:10.487 "qid": 0, 00:20:10.487 "state": "enabled", 00:20:10.487 "thread": "nvmf_tgt_poll_group_000", 00:20:10.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:10.487 "listen_address": { 00:20:10.487 "trtype": "TCP", 00:20:10.487 "adrfam": "IPv4", 00:20:10.487 "traddr": "10.0.0.2", 00:20:10.487 "trsvcid": "4420" 00:20:10.487 }, 00:20:10.487 "peer_address": { 00:20:10.487 "trtype": "TCP", 00:20:10.487 "adrfam": "IPv4", 00:20:10.487 "traddr": "10.0.0.1", 00:20:10.487 "trsvcid": "54070" 00:20:10.487 }, 00:20:10.487 "auth": { 00:20:10.487 "state": 
"completed", 00:20:10.487 "digest": "sha384", 00:20:10.487 "dhgroup": "ffdhe2048" 00:20:10.487 } 00:20:10.487 } 00:20:10.487 ]' 00:20:10.487 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.487 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.487 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.746 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.746 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.746 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.746 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.746 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.746 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:10.746 13:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:11.314 13:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.314 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:11.314 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.314 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.314 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.314 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.314 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.314 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.573 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.832 00:20:11.832 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.832 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.832 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.091 
13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.091 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.091 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.091 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.091 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.091 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.091 { 00:20:12.091 "cntlid": 61, 00:20:12.091 "qid": 0, 00:20:12.091 "state": "enabled", 00:20:12.091 "thread": "nvmf_tgt_poll_group_000", 00:20:12.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:12.091 "listen_address": { 00:20:12.091 "trtype": "TCP", 00:20:12.091 "adrfam": "IPv4", 00:20:12.091 "traddr": "10.0.0.2", 00:20:12.091 "trsvcid": "4420" 00:20:12.091 }, 00:20:12.091 "peer_address": { 00:20:12.091 "trtype": "TCP", 00:20:12.091 "adrfam": "IPv4", 00:20:12.091 "traddr": "10.0.0.1", 00:20:12.091 "trsvcid": "54086" 00:20:12.091 }, 00:20:12.091 "auth": { 00:20:12.091 "state": "completed", 00:20:12.091 "digest": "sha384", 00:20:12.091 "dhgroup": "ffdhe2048" 00:20:12.091 } 00:20:12.091 } 00:20:12.091 ]' 00:20:12.091 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.091 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.091 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.091 13:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.091 13:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.350 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.350 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.350 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.350 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:12.350 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:12.916 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.916 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.916 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.916 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.916 
13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.916 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.916 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.916 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.175 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:13.175 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.175 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.175 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:13.175 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:13.175 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.175 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:13.175 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.176 13:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.176 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.176 13:00:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:13.176 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.176 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.434 00:20:13.434 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.434 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.434 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.693 { 00:20:13.693 "cntlid": 63, 00:20:13.693 
"qid": 0, 00:20:13.693 "state": "enabled", 00:20:13.693 "thread": "nvmf_tgt_poll_group_000", 00:20:13.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:13.693 "listen_address": { 00:20:13.693 "trtype": "TCP", 00:20:13.693 "adrfam": "IPv4", 00:20:13.693 "traddr": "10.0.0.2", 00:20:13.693 "trsvcid": "4420" 00:20:13.693 }, 00:20:13.693 "peer_address": { 00:20:13.693 "trtype": "TCP", 00:20:13.693 "adrfam": "IPv4", 00:20:13.693 "traddr": "10.0.0.1", 00:20:13.693 "trsvcid": "44330" 00:20:13.693 }, 00:20:13.693 "auth": { 00:20:13.693 "state": "completed", 00:20:13.693 "digest": "sha384", 00:20:13.693 "dhgroup": "ffdhe2048" 00:20:13.693 } 00:20:13.693 } 00:20:13.693 ]' 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.693 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.951 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:13.951 13:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:14.519 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.519 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.519 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.519 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.519 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.519 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.519 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.519 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.519 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.778 13:00:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.778 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.037 00:20:15.037 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.037 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.037 13:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.295 { 00:20:15.295 "cntlid": 65, 00:20:15.295 "qid": 0, 00:20:15.295 "state": "enabled", 00:20:15.295 "thread": "nvmf_tgt_poll_group_000", 00:20:15.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:15.295 "listen_address": { 00:20:15.295 "trtype": "TCP", 00:20:15.295 "adrfam": "IPv4", 00:20:15.295 "traddr": "10.0.0.2", 00:20:15.295 "trsvcid": "4420" 00:20:15.295 }, 00:20:15.295 "peer_address": { 00:20:15.295 "trtype": "TCP", 00:20:15.295 "adrfam": "IPv4", 00:20:15.295 "traddr": "10.0.0.1", 00:20:15.295 "trsvcid": "44344" 00:20:15.295 }, 00:20:15.295 "auth": { 00:20:15.295 "state": 
"completed", 00:20:15.295 "digest": "sha384", 00:20:15.295 "dhgroup": "ffdhe3072" 00:20:15.295 } 00:20:15.295 } 00:20:15.295 ]' 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.295 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.296 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.296 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.554 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:15.554 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret 
DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:16.122 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.122 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.122 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.122 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.122 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.122 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.122 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.122 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.381 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.640 00:20:16.640 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.640 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.640 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.899 { 00:20:16.899 "cntlid": 67, 00:20:16.899 "qid": 0, 00:20:16.899 "state": "enabled", 00:20:16.899 "thread": "nvmf_tgt_poll_group_000", 00:20:16.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.899 "listen_address": { 00:20:16.899 "trtype": "TCP", 00:20:16.899 "adrfam": "IPv4", 00:20:16.899 "traddr": "10.0.0.2", 00:20:16.899 "trsvcid": "4420" 00:20:16.899 }, 00:20:16.899 "peer_address": { 00:20:16.899 "trtype": "TCP", 00:20:16.899 "adrfam": "IPv4", 00:20:16.899 "traddr": "10.0.0.1", 00:20:16.899 "trsvcid": "44356" 00:20:16.899 }, 00:20:16.899 "auth": { 00:20:16.899 "state": "completed", 00:20:16.899 "digest": "sha384", 00:20:16.899 "dhgroup": "ffdhe3072" 00:20:16.899 } 00:20:16.899 } 00:20:16.899 ]' 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.899 13:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.899 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.158 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:17.158 13:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:17.723 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.723 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.723 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:17.723 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.723 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.723 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.723 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.723 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.982 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.241 00:20:18.241 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.241 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.241 13:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.500 13:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.500 { 00:20:18.500 "cntlid": 69, 00:20:18.500 "qid": 0, 00:20:18.500 "state": "enabled", 00:20:18.500 "thread": "nvmf_tgt_poll_group_000", 00:20:18.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.500 "listen_address": { 00:20:18.500 "trtype": "TCP", 00:20:18.500 "adrfam": "IPv4", 00:20:18.500 "traddr": "10.0.0.2", 00:20:18.500 "trsvcid": "4420" 00:20:18.500 }, 00:20:18.500 "peer_address": { 00:20:18.500 "trtype": "TCP", 00:20:18.500 "adrfam": "IPv4", 00:20:18.500 "traddr": "10.0.0.1", 00:20:18.500 "trsvcid": "44388" 00:20:18.500 }, 00:20:18.500 "auth": { 00:20:18.500 "state": "completed", 00:20:18.500 "digest": "sha384", 00:20:18.500 "dhgroup": "ffdhe3072" 00:20:18.500 } 00:20:18.500 } 00:20:18.500 ]' 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.500 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.758 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:18.759 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:19.326 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.326 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:19.326 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.326 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.326 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.326 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.326 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.326 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.584 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:19.584 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.584 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.584 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.585 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:19.585 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.585 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:19.585 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.585 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.585 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.585 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:19.585 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.585 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.843 00:20:19.843 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.843 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.843 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.102 { 00:20:20.102 "cntlid": 71, 00:20:20.102 "qid": 0, 00:20:20.102 "state": "enabled", 00:20:20.102 "thread": "nvmf_tgt_poll_group_000", 00:20:20.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:20.102 "listen_address": { 00:20:20.102 "trtype": "TCP", 00:20:20.102 "adrfam": "IPv4", 00:20:20.102 "traddr": "10.0.0.2", 00:20:20.102 "trsvcid": "4420" 00:20:20.102 }, 00:20:20.102 "peer_address": { 00:20:20.102 "trtype": "TCP", 00:20:20.102 "adrfam": "IPv4", 00:20:20.102 "traddr": "10.0.0.1", 
00:20:20.102 "trsvcid": "44406" 00:20:20.102 }, 00:20:20.102 "auth": { 00:20:20.102 "state": "completed", 00:20:20.102 "digest": "sha384", 00:20:20.102 "dhgroup": "ffdhe3072" 00:20:20.102 } 00:20:20.102 } 00:20:20.102 ]' 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.102 13:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.360 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:20.360 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:20.927 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.927 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.927 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.927 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.927 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.927 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.927 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.927 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.927 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.186 13:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.186 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.445 00:20:21.445 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.445 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.445 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.703 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.703 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.703 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.703 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.703 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.703 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.703 { 00:20:21.703 "cntlid": 73, 00:20:21.703 "qid": 0, 00:20:21.703 "state": "enabled", 00:20:21.703 "thread": "nvmf_tgt_poll_group_000", 00:20:21.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.703 "listen_address": { 00:20:21.703 "trtype": "TCP", 00:20:21.703 "adrfam": "IPv4", 00:20:21.703 "traddr": "10.0.0.2", 00:20:21.703 "trsvcid": "4420" 00:20:21.703 }, 00:20:21.703 "peer_address": { 00:20:21.703 "trtype": "TCP", 00:20:21.704 "adrfam": "IPv4", 00:20:21.704 "traddr": "10.0.0.1", 00:20:21.704 "trsvcid": "44442" 00:20:21.704 }, 00:20:21.704 "auth": { 00:20:21.704 "state": "completed", 00:20:21.704 "digest": "sha384", 00:20:21.704 "dhgroup": "ffdhe4096" 00:20:21.704 } 00:20:21.704 } 00:20:21.704 ]' 00:20:21.704 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.704 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.704 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.704 13:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.704 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.704 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.704 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.704 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.962 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:21.962 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:22.531 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.531 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.531 13:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.531 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.531 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.531 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.531 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.531 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.789 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.790 13:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.790 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.048 00:20:23.048 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.048 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.048 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.307 { 00:20:23.307 "cntlid": 75, 00:20:23.307 "qid": 0, 00:20:23.307 "state": "enabled", 00:20:23.307 "thread": "nvmf_tgt_poll_group_000", 00:20:23.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.307 "listen_address": { 00:20:23.307 "trtype": "TCP", 00:20:23.307 "adrfam": "IPv4", 00:20:23.307 "traddr": "10.0.0.2", 00:20:23.307 "trsvcid": "4420" 00:20:23.307 }, 00:20:23.307 "peer_address": { 00:20:23.307 "trtype": "TCP", 00:20:23.307 "adrfam": "IPv4", 00:20:23.307 "traddr": "10.0.0.1", 00:20:23.307 "trsvcid": "44464" 00:20:23.307 }, 00:20:23.307 "auth": { 00:20:23.307 "state": "completed", 00:20:23.307 "digest": "sha384", 00:20:23.307 "dhgroup": "ffdhe4096" 00:20:23.307 } 00:20:23.307 } 00:20:23.307 ]' 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.307 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.565 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:23.566 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:24.133 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.133 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.133 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.133 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.133 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.133 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.133 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.133 13:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.391 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:24.391 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.391 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.391 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:24.391 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.391 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.391 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.392 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.392 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.392 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.392 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.392 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.392 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.650 00:20:24.650 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.650 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.650 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.650 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.650 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.650 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.650 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.909 { 00:20:24.909 "cntlid": 77, 00:20:24.909 "qid": 0, 00:20:24.909 "state": "enabled", 00:20:24.909 "thread": "nvmf_tgt_poll_group_000", 00:20:24.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.909 "listen_address": { 00:20:24.909 "trtype": "TCP", 00:20:24.909 "adrfam": "IPv4", 00:20:24.909 "traddr": "10.0.0.2", 00:20:24.909 
"trsvcid": "4420" 00:20:24.909 }, 00:20:24.909 "peer_address": { 00:20:24.909 "trtype": "TCP", 00:20:24.909 "adrfam": "IPv4", 00:20:24.909 "traddr": "10.0.0.1", 00:20:24.909 "trsvcid": "53786" 00:20:24.909 }, 00:20:24.909 "auth": { 00:20:24.909 "state": "completed", 00:20:24.909 "digest": "sha384", 00:20:24.909 "dhgroup": "ffdhe4096" 00:20:24.909 } 00:20:24.909 } 00:20:24.909 ]' 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.909 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.167 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:25.167 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.734 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.993 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.993 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.993 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.993 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.993 00:20:26.252 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.252 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.252 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.252 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.252 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.252 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.252 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.252 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.252 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.252 { 00:20:26.252 "cntlid": 79, 00:20:26.252 "qid": 0, 00:20:26.252 "state": "enabled", 00:20:26.252 "thread": "nvmf_tgt_poll_group_000", 00:20:26.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.252 "listen_address": { 00:20:26.252 "trtype": "TCP", 00:20:26.252 "adrfam": "IPv4", 00:20:26.252 "traddr": "10.0.0.2", 00:20:26.252 "trsvcid": "4420" 00:20:26.252 }, 00:20:26.252 "peer_address": { 00:20:26.252 "trtype": "TCP", 00:20:26.252 "adrfam": "IPv4", 00:20:26.252 "traddr": "10.0.0.1", 00:20:26.252 "trsvcid": "53810" 00:20:26.252 }, 00:20:26.252 "auth": { 00:20:26.252 "state": "completed", 00:20:26.252 "digest": "sha384", 00:20:26.252 "dhgroup": "ffdhe4096" 00:20:26.252 } 00:20:26.252 } 00:20:26.252 ]' 00:20:26.252 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.511 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.511 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.511 13:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.511 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.511 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.511 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.511 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.770 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:26.770 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:27.342 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.342 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.911 00:20:27.911 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.911 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.911 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.911 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.911 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.911 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.911 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.911 13:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.911 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.911 { 00:20:27.911 "cntlid": 81, 00:20:27.911 "qid": 0, 00:20:27.911 "state": "enabled", 00:20:27.911 "thread": "nvmf_tgt_poll_group_000", 00:20:27.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.911 "listen_address": { 00:20:27.911 "trtype": "TCP", 00:20:27.911 "adrfam": "IPv4", 00:20:27.912 "traddr": "10.0.0.2", 00:20:27.912 "trsvcid": "4420" 00:20:27.912 }, 00:20:27.912 "peer_address": { 00:20:27.912 "trtype": "TCP", 00:20:27.912 "adrfam": "IPv4", 00:20:27.912 "traddr": "10.0.0.1", 00:20:27.912 "trsvcid": "53830" 00:20:27.912 }, 00:20:27.912 "auth": { 00:20:27.912 "state": "completed", 00:20:27.912 "digest": "sha384", 00:20:27.912 "dhgroup": "ffdhe6144" 00:20:27.912 } 00:20:27.912 } 00:20:27.912 ]' 00:20:27.912 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.912 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.912 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.170 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.170 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.170 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.170 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.170 13:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.170 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:28.170 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:28.736 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.995 13:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.995 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.996 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.996 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.996 13:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.563 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.563 { 00:20:29.563 "cntlid": 83, 00:20:29.563 "qid": 0, 00:20:29.563 "state": "enabled", 00:20:29.563 "thread": "nvmf_tgt_poll_group_000", 00:20:29.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.563 "listen_address": { 00:20:29.563 "trtype": "TCP", 00:20:29.563 "adrfam": "IPv4", 00:20:29.563 "traddr": "10.0.0.2", 00:20:29.563 
"trsvcid": "4420" 00:20:29.563 }, 00:20:29.563 "peer_address": { 00:20:29.563 "trtype": "TCP", 00:20:29.563 "adrfam": "IPv4", 00:20:29.563 "traddr": "10.0.0.1", 00:20:29.563 "trsvcid": "53854" 00:20:29.563 }, 00:20:29.563 "auth": { 00:20:29.563 "state": "completed", 00:20:29.563 "digest": "sha384", 00:20:29.563 "dhgroup": "ffdhe6144" 00:20:29.563 } 00:20:29.563 } 00:20:29.563 ]' 00:20:29.563 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.822 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.822 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.822 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.822 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.822 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.822 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.822 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.080 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:30.080 13:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.647 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.215 00:20:31.215 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.215 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:31.215 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.215 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.215 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.215 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.215 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.215 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.215 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.215 { 00:20:31.215 "cntlid": 85, 00:20:31.215 "qid": 0, 00:20:31.215 "state": "enabled", 00:20:31.215 "thread": "nvmf_tgt_poll_group_000", 00:20:31.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.215 "listen_address": { 00:20:31.215 "trtype": "TCP", 00:20:31.215 "adrfam": "IPv4", 00:20:31.215 "traddr": "10.0.0.2", 00:20:31.215 "trsvcid": "4420" 00:20:31.215 }, 00:20:31.215 "peer_address": { 00:20:31.215 "trtype": "TCP", 00:20:31.215 "adrfam": "IPv4", 00:20:31.215 "traddr": "10.0.0.1", 00:20:31.215 "trsvcid": "53878" 00:20:31.215 }, 00:20:31.215 "auth": { 00:20:31.215 "state": "completed", 00:20:31.215 "digest": "sha384", 00:20:31.215 "dhgroup": "ffdhe6144" 00:20:31.215 } 00:20:31.215 } 00:20:31.215 ]' 00:20:31.215 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.474 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.474 13:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.474 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.474 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.474 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.474 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.474 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.732 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:31.732 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:32.300 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.300 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.300 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.300 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.300 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.300 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.300 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.300 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.300 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:32.300 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.300 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.300 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:32.300 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:32.300 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.300 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:32.300 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.300 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.559 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.559 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.559 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.559 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.818 00:20:32.818 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.818 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.818 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.076 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.076 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.076 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.076 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:33.076 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.077 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.077 { 00:20:33.077 "cntlid": 87, 00:20:33.077 "qid": 0, 00:20:33.077 "state": "enabled", 00:20:33.077 "thread": "nvmf_tgt_poll_group_000", 00:20:33.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:33.077 "listen_address": { 00:20:33.077 "trtype": "TCP", 00:20:33.077 "adrfam": "IPv4", 00:20:33.077 "traddr": "10.0.0.2", 00:20:33.077 "trsvcid": "4420" 00:20:33.077 }, 00:20:33.077 "peer_address": { 00:20:33.077 "trtype": "TCP", 00:20:33.077 "adrfam": "IPv4", 00:20:33.077 "traddr": "10.0.0.1", 00:20:33.077 "trsvcid": "53892" 00:20:33.077 }, 00:20:33.077 "auth": { 00:20:33.077 "state": "completed", 00:20:33.077 "digest": "sha384", 00:20:33.077 "dhgroup": "ffdhe6144" 00:20:33.077 } 00:20:33.077 } 00:20:33.077 ]' 00:20:33.077 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.077 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.077 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.077 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.077 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.077 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.077 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.077 13:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.336 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:33.336 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:33.903 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.903 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.903 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.903 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.903 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.903 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.903 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.903 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.903 13:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.162 13:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.729 00:20:34.729 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.729 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.729 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.730 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.730 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.730 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.730 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.730 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.730 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.730 { 00:20:34.730 "cntlid": 89, 00:20:34.730 "qid": 0, 00:20:34.730 "state": "enabled", 00:20:34.730 "thread": "nvmf_tgt_poll_group_000", 00:20:34.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.730 "listen_address": { 00:20:34.730 "trtype": "TCP", 00:20:34.730 "adrfam": "IPv4", 00:20:34.730 "traddr": "10.0.0.2", 00:20:34.730 
"trsvcid": "4420" 00:20:34.730 }, 00:20:34.730 "peer_address": { 00:20:34.730 "trtype": "TCP", 00:20:34.730 "adrfam": "IPv4", 00:20:34.730 "traddr": "10.0.0.1", 00:20:34.730 "trsvcid": "51978" 00:20:34.730 }, 00:20:34.730 "auth": { 00:20:34.730 "state": "completed", 00:20:34.730 "digest": "sha384", 00:20:34.730 "dhgroup": "ffdhe8192" 00:20:34.730 } 00:20:34.730 } 00:20:34.730 ]' 00:20:34.730 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.730 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.730 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.988 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.988 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.988 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.988 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.988 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.247 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:35.247 13:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.815 13:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.815 13:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.382 00:20:36.382 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.382 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.382 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.641 { 00:20:36.641 "cntlid": 91, 00:20:36.641 "qid": 0, 00:20:36.641 "state": "enabled", 00:20:36.641 "thread": "nvmf_tgt_poll_group_000", 00:20:36.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:36.641 "listen_address": { 00:20:36.641 "trtype": "TCP", 00:20:36.641 "adrfam": "IPv4", 00:20:36.641 "traddr": "10.0.0.2", 00:20:36.641 "trsvcid": "4420" 00:20:36.641 }, 00:20:36.641 "peer_address": { 00:20:36.641 "trtype": "TCP", 00:20:36.641 "adrfam": "IPv4", 00:20:36.641 "traddr": "10.0.0.1", 00:20:36.641 "trsvcid": "51996" 00:20:36.641 }, 00:20:36.641 "auth": { 00:20:36.641 "state": "completed", 00:20:36.641 "digest": "sha384", 00:20:36.641 "dhgroup": "ffdhe8192" 00:20:36.641 } 00:20:36.641 } 00:20:36.641 ]' 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.641 13:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.641 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.899 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:36.899 13:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:37.466 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.466 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.466 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.466 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.466 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.466 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.466 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.466 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.724 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.291 00:20:38.291 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.291 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.291 13:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.291 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.550 13:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.550 { 00:20:38.550 "cntlid": 93, 00:20:38.550 "qid": 0, 00:20:38.550 "state": "enabled", 00:20:38.550 "thread": "nvmf_tgt_poll_group_000", 00:20:38.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:38.550 "listen_address": { 00:20:38.550 "trtype": "TCP", 00:20:38.550 "adrfam": "IPv4", 00:20:38.550 "traddr": "10.0.0.2", 00:20:38.550 "trsvcid": "4420" 00:20:38.550 }, 00:20:38.550 "peer_address": { 00:20:38.550 "trtype": "TCP", 00:20:38.550 "adrfam": "IPv4", 00:20:38.550 "traddr": "10.0.0.1", 00:20:38.550 "trsvcid": "52016" 00:20:38.550 }, 00:20:38.550 "auth": { 00:20:38.550 "state": "completed", 00:20:38.550 "digest": "sha384", 00:20:38.550 "dhgroup": "ffdhe8192" 00:20:38.550 } 00:20:38.550 } 00:20:38.550 ]' 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.550 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.809 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:38.809 13:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:39.376 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.376 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:39.376 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.376 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.376 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.376 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.376 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.376 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.635 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.893 00:20:40.152 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.152 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.152 13:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.152 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.152 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.152 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.152 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.152 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.152 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.152 { 00:20:40.152 "cntlid": 95, 00:20:40.152 "qid": 0, 00:20:40.152 "state": "enabled", 00:20:40.152 "thread": "nvmf_tgt_poll_group_000", 00:20:40.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:40.152 "listen_address": { 00:20:40.152 "trtype": "TCP", 00:20:40.152 "adrfam": 
"IPv4", 00:20:40.152 "traddr": "10.0.0.2", 00:20:40.152 "trsvcid": "4420" 00:20:40.152 }, 00:20:40.152 "peer_address": { 00:20:40.152 "trtype": "TCP", 00:20:40.152 "adrfam": "IPv4", 00:20:40.152 "traddr": "10.0.0.1", 00:20:40.152 "trsvcid": "52042" 00:20:40.152 }, 00:20:40.152 "auth": { 00:20:40.152 "state": "completed", 00:20:40.152 "digest": "sha384", 00:20:40.152 "dhgroup": "ffdhe8192" 00:20:40.152 } 00:20:40.152 } 00:20:40.152 ]' 00:20:40.152 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.411 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.411 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.411 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.411 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.411 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.411 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.411 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.670 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:40.670 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:41.237 13:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.237 
13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.237 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.496 00:20:41.496 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.496 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.496 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.755 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.755 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.755 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.755 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.755 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.755 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.755 { 00:20:41.755 "cntlid": 97, 00:20:41.755 "qid": 0, 00:20:41.755 "state": "enabled", 00:20:41.755 "thread": "nvmf_tgt_poll_group_000", 00:20:41.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.755 "listen_address": { 00:20:41.755 "trtype": "TCP", 00:20:41.755 "adrfam": "IPv4", 00:20:41.755 "traddr": "10.0.0.2", 00:20:41.755 "trsvcid": "4420" 00:20:41.755 }, 00:20:41.755 "peer_address": { 00:20:41.755 "trtype": "TCP", 00:20:41.755 "adrfam": "IPv4", 00:20:41.755 "traddr": "10.0.0.1", 00:20:41.755 "trsvcid": "52070" 00:20:41.755 }, 00:20:41.755 "auth": { 00:20:41.755 "state": "completed", 00:20:41.755 "digest": "sha512", 00:20:41.755 "dhgroup": "null" 00:20:41.755 } 00:20:41.755 } 00:20:41.755 ]' 00:20:41.755 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.755 13:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.755 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.014 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:42.014 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.014 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.014 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.014 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.014 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:42.014 13:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:42.580 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.580 13:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.581 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.581 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.581 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.581 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.581 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.581 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.839 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.098 00:20:43.098 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.098 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.098 13:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.356 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.356 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.356 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.356 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.356 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.356 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.356 { 00:20:43.356 "cntlid": 99, 00:20:43.356 "qid": 0, 00:20:43.356 "state": "enabled", 00:20:43.356 "thread": "nvmf_tgt_poll_group_000", 00:20:43.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:43.356 "listen_address": { 00:20:43.357 "trtype": "TCP", 00:20:43.357 "adrfam": "IPv4", 00:20:43.357 "traddr": "10.0.0.2", 00:20:43.357 "trsvcid": "4420" 00:20:43.357 }, 00:20:43.357 "peer_address": { 00:20:43.357 "trtype": "TCP", 00:20:43.357 "adrfam": "IPv4", 00:20:43.357 "traddr": "10.0.0.1", 00:20:43.357 "trsvcid": "52090" 00:20:43.357 }, 00:20:43.357 "auth": { 00:20:43.357 "state": "completed", 00:20:43.357 "digest": "sha512", 00:20:43.357 "dhgroup": "null" 00:20:43.357 } 00:20:43.357 } 00:20:43.357 ]' 00:20:43.357 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.357 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.357 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.357 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.357 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.615 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.615 
13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.615 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.615 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:43.615 13:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:44.183 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.183 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.183 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.183 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.183 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.183 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.183 
13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:44.183 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.442 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.701 00:20:44.701 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.701 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.701 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.960 { 00:20:44.960 "cntlid": 101, 00:20:44.960 "qid": 0, 00:20:44.960 "state": "enabled", 00:20:44.960 "thread": "nvmf_tgt_poll_group_000", 00:20:44.960 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.960 "listen_address": { 00:20:44.960 "trtype": "TCP", 00:20:44.960 "adrfam": "IPv4", 00:20:44.960 "traddr": "10.0.0.2", 00:20:44.960 "trsvcid": "4420" 00:20:44.960 }, 00:20:44.960 "peer_address": { 00:20:44.960 "trtype": "TCP", 00:20:44.960 "adrfam": "IPv4", 00:20:44.960 "traddr": "10.0.0.1", 00:20:44.960 "trsvcid": "47496" 00:20:44.960 }, 00:20:44.960 "auth": { 00:20:44.960 "state": "completed", 00:20:44.960 "digest": "sha512", 00:20:44.960 "dhgroup": "null" 00:20:44.960 } 00:20:44.960 } 00:20:44.960 ]' 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.960 13:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.218 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:45.218 13:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:45.923 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.923 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.923 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.923 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.923 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.923 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.923 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.923 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.183 13:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.183 00:20:46.183 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.183 
13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.183 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.442 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.442 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.442 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.442 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.442 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.442 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.442 { 00:20:46.442 "cntlid": 103, 00:20:46.442 "qid": 0, 00:20:46.442 "state": "enabled", 00:20:46.442 "thread": "nvmf_tgt_poll_group_000", 00:20:46.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.442 "listen_address": { 00:20:46.442 "trtype": "TCP", 00:20:46.442 "adrfam": "IPv4", 00:20:46.442 "traddr": "10.0.0.2", 00:20:46.442 "trsvcid": "4420" 00:20:46.442 }, 00:20:46.442 "peer_address": { 00:20:46.442 "trtype": "TCP", 00:20:46.442 "adrfam": "IPv4", 00:20:46.442 "traddr": "10.0.0.1", 00:20:46.442 "trsvcid": "47522" 00:20:46.442 }, 00:20:46.442 "auth": { 00:20:46.442 "state": "completed", 00:20:46.442 "digest": "sha512", 00:20:46.442 "dhgroup": "null" 00:20:46.442 } 00:20:46.442 } 00:20:46.442 ]' 00:20:46.442 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.700 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:20:46.700 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.700 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.701 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.701 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.701 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.701 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.959 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:46.959 13:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.526 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.785 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.785 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.785 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.785 00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.043 { 00:20:48.043 "cntlid": 105, 00:20:48.043 "qid": 0, 00:20:48.043 "state": "enabled", 00:20:48.043 "thread": "nvmf_tgt_poll_group_000", 00:20:48.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:48.043 "listen_address": { 00:20:48.043 "trtype": "TCP", 00:20:48.043 "adrfam": "IPv4", 00:20:48.043 "traddr": "10.0.0.2", 00:20:48.043 "trsvcid": "4420" 00:20:48.043 }, 00:20:48.043 "peer_address": { 00:20:48.043 "trtype": "TCP", 00:20:48.043 "adrfam": "IPv4", 00:20:48.043 "traddr": "10.0.0.1", 00:20:48.043 "trsvcid": "47554" 00:20:48.043 }, 00:20:48.043 "auth": { 00:20:48.043 "state": "completed", 00:20:48.043 "digest": "sha512", 00:20:48.043 "dhgroup": "ffdhe2048" 00:20:48.043 } 00:20:48.043 } 00:20:48.043 ]' 00:20:48.043 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.302 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.302 13:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.302 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.302 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.302 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.302 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.302 13:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.561 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:48.561 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:49.129 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.129 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:49.129 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.129 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.129 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.129 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.129 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.129 13:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.129 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:49.129 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.129 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.129 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.129 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.129 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.129 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.129 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.129 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.387 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.387 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.387 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.387 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.387 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.646 { 00:20:49.646 "cntlid": 107, 00:20:49.646 "qid": 0, 00:20:49.646 "state": "enabled", 00:20:49.646 "thread": "nvmf_tgt_poll_group_000", 00:20:49.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.646 
"listen_address": { 00:20:49.646 "trtype": "TCP", 00:20:49.646 "adrfam": "IPv4", 00:20:49.646 "traddr": "10.0.0.2", 00:20:49.646 "trsvcid": "4420" 00:20:49.646 }, 00:20:49.646 "peer_address": { 00:20:49.646 "trtype": "TCP", 00:20:49.646 "adrfam": "IPv4", 00:20:49.646 "traddr": "10.0.0.1", 00:20:49.646 "trsvcid": "47594" 00:20:49.646 }, 00:20:49.646 "auth": { 00:20:49.646 "state": "completed", 00:20:49.646 "digest": "sha512", 00:20:49.646 "dhgroup": "ffdhe2048" 00:20:49.646 } 00:20:49.646 } 00:20:49.646 ]' 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.646 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.905 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.905 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.905 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.905 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.905 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.164 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:50.164 13:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.732 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.991 00:20:50.991 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:50.991 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.991 13:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.250 { 00:20:51.250 "cntlid": 109, 00:20:51.250 "qid": 0, 00:20:51.250 "state": "enabled", 00:20:51.250 "thread": "nvmf_tgt_poll_group_000", 00:20:51.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.250 "listen_address": { 00:20:51.250 "trtype": "TCP", 00:20:51.250 "adrfam": "IPv4", 00:20:51.250 "traddr": "10.0.0.2", 00:20:51.250 "trsvcid": "4420" 00:20:51.250 }, 00:20:51.250 "peer_address": { 00:20:51.250 "trtype": "TCP", 00:20:51.250 "adrfam": "IPv4", 00:20:51.250 "traddr": "10.0.0.1", 00:20:51.250 "trsvcid": "47624" 00:20:51.250 }, 00:20:51.250 "auth": { 00:20:51.250 "state": "completed", 00:20:51.250 "digest": "sha512", 00:20:51.250 "dhgroup": "ffdhe2048" 00:20:51.250 } 00:20:51.250 } 00:20:51.250 ]' 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.250 13:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.250 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.509 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.509 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.509 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.509 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:51.509 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:52.076 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.076 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.076 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.076 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.076 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.076 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.076 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:52.076 13:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:52.336 13:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.336 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.595 00:20:52.595 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.595 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.595 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.853 13:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.853 { 00:20:52.853 "cntlid": 111, 00:20:52.853 "qid": 0, 00:20:52.853 "state": "enabled", 00:20:52.853 "thread": "nvmf_tgt_poll_group_000", 00:20:52.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.853 "listen_address": { 00:20:52.853 "trtype": "TCP", 00:20:52.853 "adrfam": "IPv4", 00:20:52.853 "traddr": "10.0.0.2", 00:20:52.853 "trsvcid": "4420" 00:20:52.853 }, 00:20:52.853 "peer_address": { 00:20:52.853 "trtype": "TCP", 00:20:52.853 "adrfam": "IPv4", 00:20:52.853 "traddr": "10.0.0.1", 00:20:52.853 "trsvcid": "47640" 00:20:52.853 }, 00:20:52.853 "auth": { 00:20:52.853 "state": "completed", 00:20:52.853 "digest": "sha512", 00:20:52.853 "dhgroup": "ffdhe2048" 00:20:52.853 } 00:20:52.853 } 00:20:52.853 ]' 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.853 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.112 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.112 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.112 13:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.112 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:53.112 13:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:53.680 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.680 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.680 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.680 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.680 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.680 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.680 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.680 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:20:53.680 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.939 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.940 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.940 13:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.198 00:20:54.198 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.198 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.198 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.457 { 00:20:54.457 "cntlid": 113, 00:20:54.457 "qid": 0, 00:20:54.457 "state": "enabled", 00:20:54.457 "thread": "nvmf_tgt_poll_group_000", 00:20:54.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.457 "listen_address": { 
00:20:54.457 "trtype": "TCP", 00:20:54.457 "adrfam": "IPv4", 00:20:54.457 "traddr": "10.0.0.2", 00:20:54.457 "trsvcid": "4420" 00:20:54.457 }, 00:20:54.457 "peer_address": { 00:20:54.457 "trtype": "TCP", 00:20:54.457 "adrfam": "IPv4", 00:20:54.457 "traddr": "10.0.0.1", 00:20:54.457 "trsvcid": "49078" 00:20:54.457 }, 00:20:54.457 "auth": { 00:20:54.457 "state": "completed", 00:20:54.457 "digest": "sha512", 00:20:54.457 "dhgroup": "ffdhe3072" 00:20:54.457 } 00:20:54.457 } 00:20:54.457 ]' 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.457 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.716 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.716 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.716 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.716 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:54.716 13:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:20:55.283 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.283 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.283 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.283 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.283 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.283 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.283 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.283 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.542 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.800 00:20:55.800 13:01:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.800 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.800 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.059 { 00:20:56.059 "cntlid": 115, 00:20:56.059 "qid": 0, 00:20:56.059 "state": "enabled", 00:20:56.059 "thread": "nvmf_tgt_poll_group_000", 00:20:56.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.059 "listen_address": { 00:20:56.059 "trtype": "TCP", 00:20:56.059 "adrfam": "IPv4", 00:20:56.059 "traddr": "10.0.0.2", 00:20:56.059 "trsvcid": "4420" 00:20:56.059 }, 00:20:56.059 "peer_address": { 00:20:56.059 "trtype": "TCP", 00:20:56.059 "adrfam": "IPv4", 00:20:56.059 "traddr": "10.0.0.1", 00:20:56.059 "trsvcid": "49110" 00:20:56.059 }, 00:20:56.059 "auth": { 00:20:56.059 "state": "completed", 00:20:56.059 "digest": "sha512", 00:20:56.059 "dhgroup": "ffdhe3072" 00:20:56.059 } 00:20:56.059 } 00:20:56.059 ]' 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.059 13:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.318 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:56.318 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:20:56.886 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.886 13:01:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.886 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.886 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.886 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.886 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.886 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.886 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.145 13:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.403 00:20:57.403 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.403 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.403 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.661 { 00:20:57.661 "cntlid": 117, 00:20:57.661 "qid": 0, 00:20:57.661 "state": "enabled", 00:20:57.661 "thread": "nvmf_tgt_poll_group_000", 00:20:57.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:57.661 "listen_address": { 00:20:57.661 "trtype": "TCP", 00:20:57.661 "adrfam": "IPv4", 00:20:57.661 "traddr": "10.0.0.2", 00:20:57.661 "trsvcid": "4420" 00:20:57.661 }, 00:20:57.661 "peer_address": { 00:20:57.661 "trtype": "TCP", 00:20:57.661 "adrfam": "IPv4", 00:20:57.661 "traddr": "10.0.0.1", 00:20:57.661 "trsvcid": "49136" 00:20:57.661 }, 00:20:57.661 "auth": { 00:20:57.661 "state": "completed", 00:20:57.661 "digest": "sha512", 00:20:57.661 "dhgroup": "ffdhe3072" 00:20:57.661 } 00:20:57.661 } 00:20:57.661 ]' 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.661 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.920 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:57.920 13:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:20:58.487 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.487 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.488 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.488 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.488 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.488 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
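The three `jq`/`[[ ... ]]` checks repeated above are the core of `connect_authenticate`: after each attach, the script fetches `nvmf_subsystem_get_qpairs` and asserts that the negotiated digest, dhgroup, and auth state match what was configured. A hypothetical standalone sketch of that check, with the RPC output stubbed by a one-line sample shaped like the log (sed is used instead of jq so the sketch has no external dependencies):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stubbed nvmf_subsystem_get_qpairs output (shape taken from the log above).
qpairs='[{"cntlid":117,"qid":0,"state":"enabled","auth":{"state":"completed","digest":"sha512","dhgroup":"ffdhe3072"}}]'

# Pull the "auth" object out, then read individual string fields from it.
auth=$(printf '%s' "$qpairs" | sed -n 's/.*"auth":{\([^}]*\)}.*/\1/p')
field() { printf '%s' "$auth" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }

digest=$(field digest)
dhgroup=$(field dhgroup)
state=$(field state)

# Same pass/fail gate the script expresses with [[ x == \x ]] patterns.
[[ $digest == sha512 && $dhgroup == ffdhe3072 && $state == completed ]]
echo "auth ok: $digest/$dhgroup ($state)"
```

Extracting the nested `auth` object first avoids the ambiguity of two `"state"` keys in the qpair record.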
00:20:58.488 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:58.488 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.747 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.005 00:20:59.005 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.005 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.005 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.265 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.265 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.265 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.265 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.265 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.265 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.265 { 00:20:59.265 "cntlid": 119, 00:20:59.265 "qid": 0, 00:20:59.265 "state": "enabled", 00:20:59.265 "thread": "nvmf_tgt_poll_group_000", 00:20:59.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.265 "listen_address": { 00:20:59.265 
"trtype": "TCP", 00:20:59.265 "adrfam": "IPv4", 00:20:59.265 "traddr": "10.0.0.2", 00:20:59.265 "trsvcid": "4420" 00:20:59.265 }, 00:20:59.265 "peer_address": { 00:20:59.265 "trtype": "TCP", 00:20:59.265 "adrfam": "IPv4", 00:20:59.265 "traddr": "10.0.0.1", 00:20:59.265 "trsvcid": "49152" 00:20:59.265 }, 00:20:59.265 "auth": { 00:20:59.265 "state": "completed", 00:20:59.265 "digest": "sha512", 00:20:59.265 "dhgroup": "ffdhe3072" 00:20:59.265 } 00:20:59.265 } 00:20:59.265 ]' 00:20:59.265 13:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.265 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.265 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.265 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.265 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.265 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.265 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.265 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.523 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:20:59.523 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:00.091 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.091 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:00.091 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.091 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.091 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.091 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.091 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.091 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.091 13:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.349 13:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.349 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.608 00:21:00.608 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.608 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.608 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.867 { 00:21:00.867 "cntlid": 121, 00:21:00.867 "qid": 0, 00:21:00.867 "state": "enabled", 00:21:00.867 "thread": "nvmf_tgt_poll_group_000", 00:21:00.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.867 "listen_address": { 00:21:00.867 "trtype": "TCP", 00:21:00.867 "adrfam": "IPv4", 00:21:00.867 "traddr": "10.0.0.2", 00:21:00.867 "trsvcid": "4420" 00:21:00.867 }, 00:21:00.867 "peer_address": { 00:21:00.867 "trtype": "TCP", 00:21:00.867 "adrfam": "IPv4", 00:21:00.867 "traddr": "10.0.0.1", 00:21:00.867 "trsvcid": "49186" 00:21:00.867 }, 00:21:00.867 "auth": { 00:21:00.867 "state": "completed", 00:21:00.867 "digest": "sha512", 00:21:00.867 "dhgroup": "ffdhe4096" 00:21:00.867 } 00:21:00.867 } 00:21:00.867 ]' 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.867 13:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.867 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.868 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.868 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.868 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.126 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:21:01.126 13:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:21:01.694 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
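The `for keyid in "${!keys[@]}"` / `for dhgroup in "${dhgroups[@]}"` markers show the loop driving this stretch of the log: for each dhgroup and each key index, `auth.sh` reconfigures the host via `bdev_nvme_set_options`, then attaches and detaches a controller with that key. A hypothetical dry-run sketch of that control flow, with `rpc.py` replaced by an echo stub so it runs standalone:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the hostrpc wrapper; prints the RPC instead of executing it.
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

digest=sha512
dhgroups=(ffdhe3072 ffdhe4096)   # the subset exercised in this section
keys=(key0 key1 key2 key3)

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key "key$keyid"
    rpc bdev_nvme_detach_controller nvme0
  done
done
```

Resetting `bdev_nvme_set_options` before every attach is what forces each handshake to renegotiate with exactly one digest/dhgroup pair, which is why the subsequent `jq` checks can demand an exact match.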
00:21:01.694 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.694 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.694 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.694 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.694 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.694 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.694 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.953 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.212 00:21:02.212 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.212 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.212 13:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.471 { 00:21:02.471 "cntlid": 123, 00:21:02.471 "qid": 0, 00:21:02.471 "state": "enabled", 00:21:02.471 "thread": "nvmf_tgt_poll_group_000", 00:21:02.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.471 "listen_address": { 00:21:02.471 "trtype": "TCP", 00:21:02.471 "adrfam": "IPv4", 00:21:02.471 "traddr": "10.0.0.2", 00:21:02.471 "trsvcid": "4420" 00:21:02.471 }, 00:21:02.471 "peer_address": { 00:21:02.471 "trtype": "TCP", 00:21:02.471 "adrfam": "IPv4", 00:21:02.471 "traddr": "10.0.0.1", 00:21:02.471 "trsvcid": "49214" 00:21:02.471 }, 00:21:02.471 "auth": { 00:21:02.471 "state": "completed", 00:21:02.471 "digest": "sha512", 00:21:02.471 "dhgroup": "ffdhe4096" 00:21:02.471 } 00:21:02.471 } 00:21:02.471 ]' 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.471 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.730 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:21:02.730 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:21:03.298 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.298 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.298 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.298 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.298 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.298 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
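The `--dhchap-secret` strings throughout this log use the DH-HMAC-CHAP secret encoding `DHHC-1:<hmac>:<base64>:`. The `<hmac>` field selects how the key was transformed (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512 -- an assumption based on nvme-cli's `gen-dhchap-key`, not stated in the log), and the base64 payload carries the key bytes followed by a 4-byte CRC32. A hypothetical parser sketch:

```shell
#!/usr/bin/env bash
set -euo pipefail

parse_dhchap() {
  local hmac payload
  # Split "DHHC-1:<hmac>:<base64>:" on colons (base64 never contains ':').
  IFS=: read -r _ hmac payload _ <<<"$1"
  case $hmac in
    00) echo "hmac=none"    ;;
    01) echo "hmac=sha256"  ;;
    02) echo "hmac=sha384"  ;;
    03) echo "hmac=sha512"  ;;
    *)  echo "hmac=unknown" ;;
  esac
  # Decoded length = key size + 4 CRC bytes (e.g. 68 for a 64-byte key).
  echo "decoded_bytes=$(( $(printf '%s' "$payload" | base64 -d | wc -c) ))"
}

# One of the key3 secrets from the log above.
parse_dhchap "DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=:"
```

This explains why the key3 secrets (used for sha512-transformed keys) are visibly longer than the key1 secrets in the connect lines.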
00:21:03.298 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.298 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.557 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.816 00:21:03.816 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.816 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.816 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.075 { 00:21:04.075 "cntlid": 125, 00:21:04.075 "qid": 0, 00:21:04.075 "state": "enabled", 00:21:04.075 "thread": "nvmf_tgt_poll_group_000", 00:21:04.075 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.075 "listen_address": { 00:21:04.075 "trtype": "TCP", 00:21:04.075 "adrfam": "IPv4", 00:21:04.075 "traddr": "10.0.0.2", 00:21:04.075 "trsvcid": "4420" 00:21:04.075 }, 00:21:04.075 "peer_address": { 00:21:04.075 "trtype": "TCP", 00:21:04.075 "adrfam": "IPv4", 00:21:04.075 "traddr": "10.0.0.1", 00:21:04.075 "trsvcid": "60880" 00:21:04.075 }, 00:21:04.075 "auth": { 00:21:04.075 "state": "completed", 00:21:04.075 "digest": "sha512", 00:21:04.075 "dhgroup": "ffdhe4096" 00:21:04.075 } 00:21:04.075 } 00:21:04.075 ]' 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.075 13:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.334 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:21:04.334 13:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.902 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.161 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.161 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:05.161 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.161 13:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.420 00:21:05.420 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:05.420 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.420 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.420 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.420 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.420 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.420 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.420 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.420 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.420 { 00:21:05.420 "cntlid": 127, 00:21:05.420 "qid": 0, 00:21:05.420 "state": "enabled", 00:21:05.420 "thread": "nvmf_tgt_poll_group_000", 00:21:05.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.420 "listen_address": { 00:21:05.420 "trtype": "TCP", 00:21:05.420 "adrfam": "IPv4", 00:21:05.421 "traddr": "10.0.0.2", 00:21:05.421 "trsvcid": "4420" 00:21:05.421 }, 00:21:05.421 "peer_address": { 00:21:05.421 "trtype": "TCP", 00:21:05.421 "adrfam": "IPv4", 00:21:05.421 "traddr": "10.0.0.1", 00:21:05.421 "trsvcid": "60924" 00:21:05.421 }, 00:21:05.421 "auth": { 00:21:05.421 "state": "completed", 00:21:05.421 "digest": "sha512", 00:21:05.421 "dhgroup": "ffdhe4096" 00:21:05.421 } 00:21:05.421 } 00:21:05.421 ]' 00:21:05.421 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.679 13:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.680 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.680 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.680 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.680 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.680 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.680 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.938 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:05.939 13:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.507 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.074 00:21:07.074 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.074 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.074 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.074 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.074 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.074 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.074 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.333 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.333 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.333 { 00:21:07.333 "cntlid": 129, 00:21:07.333 "qid": 0, 00:21:07.333 "state": "enabled", 00:21:07.333 "thread": "nvmf_tgt_poll_group_000", 00:21:07.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.333 "listen_address": { 00:21:07.333 "trtype": "TCP", 00:21:07.333 "adrfam": "IPv4", 00:21:07.333 "traddr": "10.0.0.2", 00:21:07.333 "trsvcid": "4420" 00:21:07.333 }, 00:21:07.333 "peer_address": { 00:21:07.333 "trtype": "TCP", 00:21:07.333 "adrfam": "IPv4", 00:21:07.333 "traddr": "10.0.0.1", 00:21:07.333 "trsvcid": "60956" 00:21:07.333 }, 00:21:07.333 "auth": { 00:21:07.333 "state": "completed", 00:21:07.333 "digest": "sha512", 00:21:07.333 "dhgroup": "ffdhe6144" 00:21:07.333 } 00:21:07.333 } 00:21:07.333 ]' 00:21:07.333 13:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.333 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.333 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.333 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:07.333 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.333 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:07.333 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.333 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.591 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:21:07.591 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:21:08.159 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.159 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.159 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.159 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.159 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.159 13:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.159 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.159 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.159 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:08.159 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.159 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.159 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:08.159 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:08.159 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.159 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.159 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.159 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.418 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.418 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:08.418 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.418 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.676 00:21:08.676 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.676 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.676 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.936 { 00:21:08.936 "cntlid": 131, 00:21:08.936 "qid": 0, 00:21:08.936 "state": 
"enabled", 00:21:08.936 "thread": "nvmf_tgt_poll_group_000", 00:21:08.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.936 "listen_address": { 00:21:08.936 "trtype": "TCP", 00:21:08.936 "adrfam": "IPv4", 00:21:08.936 "traddr": "10.0.0.2", 00:21:08.936 "trsvcid": "4420" 00:21:08.936 }, 00:21:08.936 "peer_address": { 00:21:08.936 "trtype": "TCP", 00:21:08.936 "adrfam": "IPv4", 00:21:08.936 "traddr": "10.0.0.1", 00:21:08.936 "trsvcid": "60980" 00:21:08.936 }, 00:21:08.936 "auth": { 00:21:08.936 "state": "completed", 00:21:08.936 "digest": "sha512", 00:21:08.936 "dhgroup": "ffdhe6144" 00:21:08.936 } 00:21:08.936 } 00:21:08.936 ]' 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.936 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.195 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret 
DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:21:09.195 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:21:09.762 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.762 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.762 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.762 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.762 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.762 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.762 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.762 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.021 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.280 00:21:10.280 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.280 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.280 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.539 { 00:21:10.539 "cntlid": 133, 00:21:10.539 "qid": 0, 00:21:10.539 "state": "enabled", 00:21:10.539 "thread": "nvmf_tgt_poll_group_000", 00:21:10.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.539 "listen_address": { 00:21:10.539 "trtype": "TCP", 00:21:10.539 "adrfam": "IPv4", 00:21:10.539 "traddr": "10.0.0.2", 00:21:10.539 "trsvcid": "4420" 00:21:10.539 }, 00:21:10.539 "peer_address": { 00:21:10.539 "trtype": "TCP", 00:21:10.539 "adrfam": "IPv4", 00:21:10.539 "traddr": "10.0.0.1", 00:21:10.539 "trsvcid": "32770" 00:21:10.539 }, 00:21:10.539 "auth": { 00:21:10.539 "state": "completed", 00:21:10.539 "digest": "sha512", 00:21:10.539 "dhgroup": "ffdhe6144" 00:21:10.539 } 
00:21:10.539 } 00:21:10.539 ]' 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.539 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.798 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:21:10.798 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:21:11.366 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
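The repeated `jq -r '.[0].auth.digest'` / `'.[0].auth.dhgroup'` / `'.[0].auth.state'` checks in the log all parse the JSON that `nvmf_subsystem_get_qpairs` prints (the `qpairs='[ { ... "auth": { ... } } ]'` blocks above). A minimal standalone sketch of that verification step is below; the sample JSON is a trimmed copy of the qpairs output from the log, and `sed` stands in for `jq` only so the sketch runs with no dependencies — the actual test uses `jq` as shown.

```shell
#!/usr/bin/env bash
# Trimmed sample mirroring the qpairs JSON printed in the log; the real
# data comes from: rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
qpairs='[
  {
    "cntlid": 125,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "ffdhe4096"
    }
  }
]'

# Extract the first occurrence of a string field; the test script itself
# does this with jq, e.g.: jq -r '.[0].auth.digest'
get_field() {
  printf '%s\n' "$qpairs" |
    sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p" | head -n1
}

digest=$(get_field digest)
dhgroup=$(get_field dhgroup)

# Mirror the [[ sha512 == \s\h\a\5\1\2 ]]-style comparisons from the log.
[[ $digest == sha512 ]] || { echo "unexpected digest: $digest"; exit 1; }
[[ $dhgroup == ffdhe4096 ]] || { echo "unexpected dhgroup: $dhgroup"; exit 1; }
echo "auth negotiated with $digest / $dhgroup"
# prints: auth negotiated with sha512 / ffdhe4096
```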
00:21:11.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.366 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.366 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.366 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.366 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.366 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.366 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.366 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.625 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.884 00:21:11.884 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.884 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.884 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.143 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.143 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:12.143 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.143 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.143 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.143 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.143 { 00:21:12.143 "cntlid": 135, 00:21:12.143 "qid": 0, 00:21:12.143 "state": "enabled", 00:21:12.143 "thread": "nvmf_tgt_poll_group_000", 00:21:12.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:12.143 "listen_address": { 00:21:12.143 "trtype": "TCP", 00:21:12.143 "adrfam": "IPv4", 00:21:12.143 "traddr": "10.0.0.2", 00:21:12.143 "trsvcid": "4420" 00:21:12.143 }, 00:21:12.143 "peer_address": { 00:21:12.143 "trtype": "TCP", 00:21:12.143 "adrfam": "IPv4", 00:21:12.143 "traddr": "10.0.0.1", 00:21:12.143 "trsvcid": "32808" 00:21:12.143 }, 00:21:12.143 "auth": { 00:21:12.143 "state": "completed", 00:21:12.143 "digest": "sha512", 00:21:12.143 "dhgroup": "ffdhe6144" 00:21:12.143 } 00:21:12.143 } 00:21:12.143 ]' 00:21:12.143 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.143 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.143 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.143 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.143 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.401 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.401 13:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.401 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.402 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:12.402 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:12.969 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.969 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.969 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.969 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.969 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.969 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.969 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.969 13:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.969 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.228 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.795 00:21:13.795 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.795 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.795 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.054 { 00:21:14.054 "cntlid": 137, 00:21:14.054 "qid": 0, 00:21:14.054 "state": "enabled", 00:21:14.054 "thread": "nvmf_tgt_poll_group_000", 00:21:14.054 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:14.054 "listen_address": { 00:21:14.054 "trtype": "TCP", 00:21:14.054 "adrfam": "IPv4", 00:21:14.054 "traddr": "10.0.0.2", 00:21:14.054 "trsvcid": "4420" 00:21:14.054 }, 00:21:14.054 "peer_address": { 00:21:14.054 "trtype": "TCP", 00:21:14.054 "adrfam": "IPv4", 00:21:14.054 "traddr": "10.0.0.1", 00:21:14.054 "trsvcid": "37306" 00:21:14.054 }, 00:21:14.054 "auth": { 00:21:14.054 "state": "completed", 00:21:14.054 "digest": "sha512", 00:21:14.054 "dhgroup": "ffdhe8192" 00:21:14.054 } 00:21:14.054 } 00:21:14.054 ]' 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.054 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.313 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret 
DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:21:14.313 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:21:14.881 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.881 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.881 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.881 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.881 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.881 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.881 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:14.881 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.140 13:01:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.140 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.708 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.708 { 00:21:15.708 "cntlid": 139, 00:21:15.708 "qid": 0, 00:21:15.708 "state": "enabled", 00:21:15.708 "thread": "nvmf_tgt_poll_group_000", 00:21:15.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.708 "listen_address": { 00:21:15.708 "trtype": "TCP", 00:21:15.708 "adrfam": "IPv4", 00:21:15.708 "traddr": "10.0.0.2", 00:21:15.708 "trsvcid": "4420" 00:21:15.708 }, 00:21:15.708 "peer_address": { 00:21:15.708 "trtype": "TCP", 00:21:15.708 "adrfam": "IPv4", 00:21:15.708 "traddr": "10.0.0.1", 00:21:15.708 "trsvcid": "37336" 00:21:15.708 }, 00:21:15.708 "auth": { 00:21:15.708 "state": 
"completed", 00:21:15.708 "digest": "sha512", 00:21:15.708 "dhgroup": "ffdhe8192" 00:21:15.708 } 00:21:15.708 } 00:21:15.708 ]' 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.708 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.967 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.967 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.967 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.967 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.967 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.967 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:21:15.967 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: --dhchap-ctrl-secret DHHC-1:02:MzAzYmY3ZDNkNzU3ZTNkMjgwNjJmNzNmNjgwODhiMGMyYmEyMDRmMWI1MWM0ZmRkq8x5wA==: 00:21:16.535 13:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.535 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:16.535 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.535 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.794 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.361 00:21:17.361 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.361 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.361 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.619 
13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.619 { 00:21:17.619 "cntlid": 141, 00:21:17.619 "qid": 0, 00:21:17.619 "state": "enabled", 00:21:17.619 "thread": "nvmf_tgt_poll_group_000", 00:21:17.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:17.619 "listen_address": { 00:21:17.619 "trtype": "TCP", 00:21:17.619 "adrfam": "IPv4", 00:21:17.619 "traddr": "10.0.0.2", 00:21:17.619 "trsvcid": "4420" 00:21:17.619 }, 00:21:17.619 "peer_address": { 00:21:17.619 "trtype": "TCP", 00:21:17.619 "adrfam": "IPv4", 00:21:17.619 "traddr": "10.0.0.1", 00:21:17.619 "trsvcid": "37360" 00:21:17.619 }, 00:21:17.619 "auth": { 00:21:17.619 "state": "completed", 00:21:17.619 "digest": "sha512", 00:21:17.619 "dhgroup": "ffdhe8192" 00:21:17.619 } 00:21:17.619 } 00:21:17.619 ]' 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.619 13:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.619 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.877 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:21:17.877 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:01:ZTBlMDY0NjE5N2Y3MGFiNGMwYmUyOTk0NWYwMGFiOTZ55Y8M: 00:21:18.445 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.445 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:18.445 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.445 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.445 
13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.445 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.445 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.445 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.703 13:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.703 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.271 00:21:19.271 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.271 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.271 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.271 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.271 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.271 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.271 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.271 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.271 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.271 { 00:21:19.271 "cntlid": 143, 
00:21:19.271 "qid": 0, 00:21:19.271 "state": "enabled", 00:21:19.271 "thread": "nvmf_tgt_poll_group_000", 00:21:19.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.271 "listen_address": { 00:21:19.271 "trtype": "TCP", 00:21:19.271 "adrfam": "IPv4", 00:21:19.271 "traddr": "10.0.0.2", 00:21:19.271 "trsvcid": "4420" 00:21:19.271 }, 00:21:19.271 "peer_address": { 00:21:19.271 "trtype": "TCP", 00:21:19.271 "adrfam": "IPv4", 00:21:19.271 "traddr": "10.0.0.1", 00:21:19.271 "trsvcid": "37386" 00:21:19.271 }, 00:21:19.271 "auth": { 00:21:19.271 "state": "completed", 00:21:19.271 "digest": "sha512", 00:21:19.271 "dhgroup": "ffdhe8192" 00:21:19.271 } 00:21:19.271 } 00:21:19.271 ]' 00:21:19.271 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.530 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.530 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.530 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.530 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.530 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.530 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.530 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.788 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:19.789 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:21:20.356 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.615 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.874 00:21:21.133 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.133 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.133 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.133 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.133 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.133 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.133 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.133 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.133 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.133 { 00:21:21.133 "cntlid": 145, 00:21:21.133 "qid": 0, 00:21:21.133 "state": "enabled", 00:21:21.133 "thread": "nvmf_tgt_poll_group_000", 00:21:21.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.133 "listen_address": { 
00:21:21.133 "trtype": "TCP", 00:21:21.133 "adrfam": "IPv4", 00:21:21.133 "traddr": "10.0.0.2", 00:21:21.133 "trsvcid": "4420" 00:21:21.133 }, 00:21:21.133 "peer_address": { 00:21:21.133 "trtype": "TCP", 00:21:21.133 "adrfam": "IPv4", 00:21:21.133 "traddr": "10.0.0.1", 00:21:21.133 "trsvcid": "37418" 00:21:21.133 }, 00:21:21.133 "auth": { 00:21:21.133 "state": "completed", 00:21:21.133 "digest": "sha512", 00:21:21.133 "dhgroup": "ffdhe8192" 00:21:21.133 } 00:21:21.133 } 00:21:21.133 ]' 00:21:21.133 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.391 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.391 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.391 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.391 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.391 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.391 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.391 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.650 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:21:21.650 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGU4OTFiNWMwOGNiNjEzNjJhMjUzOWE1MWQyZTJkZjA2YjVmODE0ZTllYjc5ZjVkm4rP8A==: --dhchap-ctrl-secret DHHC-1:03:MWFjYmZiZmQ0YTBhYWUxZjRmYmEwZmEyOWQwYjE4MDdjMGM5NmFmYzgxNmM3ZjBjMjYxM2RjOWJmODY4NWU4ZI7y9FE=: 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:22.218 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:22.786 request: 00:21:22.786 { 00:21:22.786 "name": "nvme0", 00:21:22.786 "trtype": "tcp", 00:21:22.786 "traddr": "10.0.0.2", 00:21:22.786 "adrfam": "ipv4", 00:21:22.786 "trsvcid": "4420", 00:21:22.786 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:22.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.786 "prchk_reftag": false, 00:21:22.786 "prchk_guard": false, 00:21:22.786 "hdgst": false, 00:21:22.786 "ddgst": 
false, 00:21:22.786 "dhchap_key": "key2", 00:21:22.786 "allow_unrecognized_csi": false, 00:21:22.786 "method": "bdev_nvme_attach_controller", 00:21:22.786 "req_id": 1 00:21:22.786 } 00:21:22.786 Got JSON-RPC error response 00:21:22.786 response: 00:21:22.786 { 00:21:22.786 "code": -5, 00:21:22.786 "message": "Input/output error" 00:21:22.786 } 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:22.786 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:22.787 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:22.787 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:22.787 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:22.787 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.787 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:22.787 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:22.787 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:22.787 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:22.787 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:23.131 request: 00:21:23.131 { 00:21:23.131 "name": "nvme0", 00:21:23.131 "trtype": "tcp", 00:21:23.131 "traddr": "10.0.0.2", 
00:21:23.131 "adrfam": "ipv4", 00:21:23.131 "trsvcid": "4420", 00:21:23.131 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:23.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.131 "prchk_reftag": false, 00:21:23.131 "prchk_guard": false, 00:21:23.131 "hdgst": false, 00:21:23.131 "ddgst": false, 00:21:23.131 "dhchap_key": "key1", 00:21:23.131 "dhchap_ctrlr_key": "ckey2", 00:21:23.131 "allow_unrecognized_csi": false, 00:21:23.131 "method": "bdev_nvme_attach_controller", 00:21:23.131 "req_id": 1 00:21:23.131 } 00:21:23.131 Got JSON-RPC error response 00:21:23.131 response: 00:21:23.131 { 00:21:23.131 "code": -5, 00:21:23.131 "message": "Input/output error" 00:21:23.131 } 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.131 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.700 request: 00:21:23.700 { 00:21:23.700 "name": "nvme0", 00:21:23.700 "trtype": "tcp", 00:21:23.700 "traddr": "10.0.0.2", 00:21:23.700 "adrfam": "ipv4", 00:21:23.700 "trsvcid": "4420", 00:21:23.700 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:23.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.700 "prchk_reftag": false, 00:21:23.700 "prchk_guard": false, 00:21:23.700 "hdgst": false, 00:21:23.700 "ddgst": false, 00:21:23.700 "dhchap_key": "key1", 00:21:23.700 "dhchap_ctrlr_key": "ckey1", 00:21:23.700 "allow_unrecognized_csi": false, 00:21:23.700 "method": "bdev_nvme_attach_controller", 00:21:23.700 "req_id": 1 00:21:23.700 } 00:21:23.700 Got JSON-RPC error response 00:21:23.700 response: 00:21:23.700 { 00:21:23.700 "code": -5, 00:21:23.700 "message": "Input/output error" 00:21:23.700 } 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.700 
13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 981585 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 981585 ']' 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 981585 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981585 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981585' 00:21:23.700 killing process with pid 981585 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 981585 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 981585 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1003264 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:23.700 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1003264 00:21:23.701 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1003264 ']' 00:21:23.701 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.701 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.701 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:23.701 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.701 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1003264 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1003264 ']' 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.960 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.219 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:24.219 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:24.219 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.219 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 null0 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Kti 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Zdw ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zdw 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 13:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kHM 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.G2E ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.G2E 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.07K 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 13:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.dJX ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dJX 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kp4 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.478 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.414 nvme0n1 00:21:25.414 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.414 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.414 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.414 { 00:21:25.414 "cntlid": 1, 00:21:25.414 "qid": 0, 00:21:25.414 "state": "enabled", 00:21:25.414 "thread": "nvmf_tgt_poll_group_000", 00:21:25.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:25.414 "listen_address": { 00:21:25.414 "trtype": "TCP", 00:21:25.414 "adrfam": "IPv4", 00:21:25.414 "traddr": "10.0.0.2", 00:21:25.414 "trsvcid": "4420" 00:21:25.414 }, 00:21:25.414 "peer_address": { 00:21:25.414 "trtype": "TCP", 00:21:25.414 "adrfam": "IPv4", 00:21:25.414 "traddr": "10.0.0.1", 00:21:25.414 "trsvcid": "58236" 00:21:25.414 }, 00:21:25.414 "auth": { 00:21:25.414 "state": "completed", 00:21:25.414 "digest": "sha512", 00:21:25.414 "dhgroup": "ffdhe8192" 00:21:25.414 } 00:21:25.414 } 00:21:25.414 ]' 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.414 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:21:25.415 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.673 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.673 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.673 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.673 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:25.673 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:26.240 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.240 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.240 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.240 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.499 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.758 request: 00:21:26.758 { 00:21:26.758 "name": "nvme0", 00:21:26.758 "trtype": "tcp", 00:21:26.758 "traddr": "10.0.0.2", 00:21:26.758 "adrfam": "ipv4", 00:21:26.758 "trsvcid": "4420", 00:21:26.758 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:26.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.758 "prchk_reftag": false, 00:21:26.758 "prchk_guard": false, 00:21:26.758 "hdgst": false, 00:21:26.758 "ddgst": false, 00:21:26.758 "dhchap_key": "key3", 00:21:26.758 "allow_unrecognized_csi": false, 00:21:26.758 "method": "bdev_nvme_attach_controller", 00:21:26.758 "req_id": 1 00:21:26.758 } 00:21:26.758 Got JSON-RPC error response 00:21:26.758 response: 00:21:26.758 { 00:21:26.758 "code": -5, 00:21:26.758 "message": "Input/output error" 00:21:26.758 } 00:21:26.758 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:26.758 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:26.758 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:26.758 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:26.758 13:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:26.758 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:26.758 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:26.758 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:27.017 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.017 request: 00:21:27.017 { 00:21:27.017 "name": "nvme0", 00:21:27.017 "trtype": "tcp", 00:21:27.017 "traddr": "10.0.0.2", 00:21:27.017 "adrfam": "ipv4", 00:21:27.017 "trsvcid": "4420", 00:21:27.017 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:27.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.017 "prchk_reftag": false, 00:21:27.017 "prchk_guard": false, 00:21:27.017 "hdgst": false, 00:21:27.017 "ddgst": false, 00:21:27.017 "dhchap_key": "key3", 00:21:27.017 "allow_unrecognized_csi": false, 00:21:27.017 "method": "bdev_nvme_attach_controller", 00:21:27.017 "req_id": 1 00:21:27.017 } 00:21:27.017 Got JSON-RPC error response 00:21:27.017 response: 00:21:27.017 { 00:21:27.017 "code": -5, 00:21:27.017 "message": "Input/output error" 00:21:27.017 } 00:21:27.276 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:27.276 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.276 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.276 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.276 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:27.276 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:27.276 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:21:27.277 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:27.277 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:27.277 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:27.277 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:27.843 request: 00:21:27.843 { 00:21:27.843 "name": "nvme0", 00:21:27.843 "trtype": "tcp", 00:21:27.843 "traddr": "10.0.0.2", 00:21:27.843 "adrfam": "ipv4", 00:21:27.843 "trsvcid": "4420", 00:21:27.843 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:27.844 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.844 "prchk_reftag": false, 00:21:27.844 "prchk_guard": false, 00:21:27.844 "hdgst": false, 00:21:27.844 "ddgst": false, 00:21:27.844 "dhchap_key": "key0", 00:21:27.844 "dhchap_ctrlr_key": "key1", 00:21:27.844 "allow_unrecognized_csi": false, 00:21:27.844 "method": "bdev_nvme_attach_controller", 00:21:27.844 "req_id": 1 00:21:27.844 } 00:21:27.844 Got JSON-RPC error response 00:21:27.844 response: 00:21:27.844 { 00:21:27.844 "code": -5, 00:21:27.844 "message": "Input/output error" 00:21:27.844 } 00:21:27.844 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:27.844 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.844 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.844 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.844 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:27.844 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:27.844 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:27.844 nvme0n1 00:21:28.102 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:21:28.102 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:28.102 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.102 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.102 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.102 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.361 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:28.361 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.361 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.361 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.361 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:28.361 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:28.361 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:29.297 nvme0n1 00:21:29.297 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:29.297 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:29.297 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.297 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.297 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:29.297 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.297 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.297 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.297 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:29.297 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:29.297 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.556 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.556 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:29.556 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: --dhchap-ctrl-secret DHHC-1:03:ZTFlYjZlZDAwMzAyOWZkOGNlY2U1MGRiNmVhMWQxZjZmMDFiNjdmZDE4NGU4NzVkNDUyODg1MTUzNTllYTdiMf4LNPc=: 00:21:30.123 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:30.123 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:30.123 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:30.123 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:30.123 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:30.123 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:30.123 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:30.123 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.123 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.382 13:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:30.382 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:30.382 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:30.382 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:30.382 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.382 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:30.382 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.382 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:30.382 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:30.382 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:30.643 request: 00:21:30.643 { 00:21:30.643 "name": "nvme0", 00:21:30.643 "trtype": "tcp", 00:21:30.643 "traddr": "10.0.0.2", 00:21:30.643 "adrfam": "ipv4", 00:21:30.643 "trsvcid": "4420", 00:21:30.643 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:30.643 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.643 "prchk_reftag": false, 00:21:30.643 "prchk_guard": false, 00:21:30.643 "hdgst": false, 00:21:30.643 "ddgst": false, 00:21:30.643 "dhchap_key": "key1", 00:21:30.643 "allow_unrecognized_csi": false, 00:21:30.643 "method": "bdev_nvme_attach_controller", 00:21:30.643 "req_id": 1 00:21:30.643 } 00:21:30.643 Got JSON-RPC error response 00:21:30.643 response: 00:21:30.643 { 00:21:30.643 "code": -5, 00:21:30.643 "message": "Input/output error" 00:21:30.643 } 00:21:30.643 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:30.643 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:30.643 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:30.643 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:30.643 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:30.643 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:30.643 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:31.579 nvme0n1 00:21:31.579 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:21:31.579 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:31.579 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.579 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.579 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.579 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.838 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.838 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.838 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.838 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.838 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:31.838 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:31.838 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:32.098 nvme0n1 00:21:32.098 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:32.098 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.098 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:32.356 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.356 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.356 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: '' 2s 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:32.615 13:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: ]] 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZWMxMjQzYmQ5ZWY2YTgxYmRjZTgzMDk0M2E1M2ZmZTlnWLX9: 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:32.615 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: 2s 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: ]] 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:N2Y5YzIxM2UzZWIxM2RkNTc1NWY2M2IzZTMxNzdmYTM2MWNhYjE0N2FkZDU0ZTdl03XrMQ==: 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:34.518 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.051 13:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:37.051 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:37.317 nvme0n1 00:21:37.577 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.577 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.577 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.577 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.577 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.577 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:37.835 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:37.835 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:37.835 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.093 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.093 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:38.093 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.093 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.093 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.093 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:38.093 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:38.352 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:38.352 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.352 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:38.610 13:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.610 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:38.611 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:38.869 request: 00:21:38.869 { 00:21:38.869 "name": "nvme0", 00:21:38.869 "dhchap_key": "key1", 00:21:38.869 "dhchap_ctrlr_key": "key3", 00:21:38.869 "method": "bdev_nvme_set_keys", 00:21:38.869 "req_id": 1 00:21:38.869 } 00:21:38.869 Got JSON-RPC error response 00:21:38.869 response: 00:21:38.869 { 00:21:38.869 "code": -13, 00:21:38.869 "message": "Permission denied" 00:21:38.869 } 00:21:38.869 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:38.869 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.869 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.869 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.869 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:38.869 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:38.869 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.128 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:39.128 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:40.064 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:40.064 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:40.064 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.323 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:40.323 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:40.323 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.323 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.323 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.323 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:40.323 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:40.323 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:41.353 nvme0n1 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:41.353 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:41.612 request: 00:21:41.612 { 00:21:41.612 "name": "nvme0", 00:21:41.612 "dhchap_key": "key2", 
00:21:41.612 "dhchap_ctrlr_key": "key0", 00:21:41.612 "method": "bdev_nvme_set_keys", 00:21:41.612 "req_id": 1 00:21:41.612 } 00:21:41.612 Got JSON-RPC error response 00:21:41.612 response: 00:21:41.612 { 00:21:41.612 "code": -13, 00:21:41.612 "message": "Permission denied" 00:21:41.612 } 00:21:41.612 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:41.612 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.612 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.612 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.612 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:41.612 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:41.612 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.870 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:41.870 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:42.805 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:42.805 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:42.805 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:43.063 13:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 981761 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 981761 ']' 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 981761 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 981761 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 981761' 00:21:43.063 killing process with pid 981761 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 981761 00:21:43.063 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 981761 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.321 rmmod nvme_tcp 00:21:43.321 rmmod nvme_fabrics 00:21:43.321 rmmod nvme_keyring 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1003264 ']' 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1003264 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1003264 ']' 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1003264 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1003264 00:21:43.321 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1003264' 00:21:43.579 killing process with pid 1003264 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1003264 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1003264 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.579 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Kti /tmp/spdk.key-sha256.kHM /tmp/spdk.key-sha384.07K 
/tmp/spdk.key-sha512.kp4 /tmp/spdk.key-sha512.Zdw /tmp/spdk.key-sha384.G2E /tmp/spdk.key-sha256.dJX '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:46.114 00:21:46.114 real 2m31.544s 00:21:46.114 user 5m49.347s 00:21:46.114 sys 0m24.289s 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.114 ************************************ 00:21:46.114 END TEST nvmf_auth_target 00:21:46.114 ************************************ 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.114 ************************************ 00:21:46.114 START TEST nvmf_bdevio_no_huge 00:21:46.114 ************************************ 00:21:46.114 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:46.114 * Looking for test storage... 
00:21:46.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:46.115 13:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.115 13:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:46.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.115 --rc genhtml_branch_coverage=1 00:21:46.115 --rc genhtml_function_coverage=1 00:21:46.115 --rc genhtml_legend=1 00:21:46.115 --rc geninfo_all_blocks=1 00:21:46.115 --rc geninfo_unexecuted_blocks=1 00:21:46.115 00:21:46.115 ' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:46.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.115 --rc genhtml_branch_coverage=1 00:21:46.115 --rc genhtml_function_coverage=1 00:21:46.115 --rc genhtml_legend=1 00:21:46.115 --rc geninfo_all_blocks=1 00:21:46.115 --rc geninfo_unexecuted_blocks=1 00:21:46.115 00:21:46.115 ' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:46.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.115 --rc genhtml_branch_coverage=1 00:21:46.115 --rc genhtml_function_coverage=1 00:21:46.115 --rc genhtml_legend=1 00:21:46.115 --rc geninfo_all_blocks=1 00:21:46.115 --rc geninfo_unexecuted_blocks=1 00:21:46.115 00:21:46.115 ' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:46.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.115 --rc genhtml_branch_coverage=1 00:21:46.115 --rc genhtml_function_coverage=1 00:21:46.115 --rc genhtml_legend=1 00:21:46.115 --rc geninfo_all_blocks=1 00:21:46.115 --rc geninfo_unexecuted_blocks=1 00:21:46.115 00:21:46.115 ' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:46.115 
13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.115 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.116 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:21:52.684 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:52.684 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:52.684 Found net devices under 0000:af:00.0: cvl_0_0 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.684 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.685 
13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:52.685 Found net devices under 0000:af:00.1: cvl_0_1 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:52.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:21:52.685 00:21:52.685 --- 10.0.0.2 ping statistics --- 00:21:52.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.685 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:21:52.685 00:21:52.685 --- 10.0.0.1 ping statistics --- 00:21:52.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.685 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1009983 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1009983 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1009983 ']' 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 [2024-12-15 13:01:59.740954] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:21:52.685 [2024-12-15 13:01:59.741011] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:52.685 [2024-12-15 13:01:59.821766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.685 [2024-12-15 13:01:59.857374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.685 [2024-12-15 13:01:59.857409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.685 [2024-12-15 13:01:59.857416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.685 [2024-12-15 13:01:59.857422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.685 [2024-12-15 13:01:59.857431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:52.685 [2024-12-15 13:01:59.858386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:52.685 [2024-12-15 13:01:59.858493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:21:52.685 [2024-12-15 13:01:59.858597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.685 [2024-12-15 13:01:59.858598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.685 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 [2024-12-15 13:02:00.007056] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:52.685 13:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 Malloc0 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.685 [2024-12-15 13:02:00.052219] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.685 13:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:52.685 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:52.686 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:52.686 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:52.686 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.686 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.686 { 00:21:52.686 "params": { 00:21:52.686 "name": "Nvme$subsystem", 00:21:52.686 "trtype": "$TEST_TRANSPORT", 00:21:52.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.686 "adrfam": "ipv4", 00:21:52.686 "trsvcid": "$NVMF_PORT", 00:21:52.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.686 "hdgst": ${hdgst:-false}, 00:21:52.686 "ddgst": ${ddgst:-false} 00:21:52.686 }, 00:21:52.686 "method": "bdev_nvme_attach_controller" 00:21:52.686 } 00:21:52.686 EOF 00:21:52.686 )") 00:21:52.686 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:52.686 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:21:52.686 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:52.686 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:52.686 "params": { 00:21:52.686 "name": "Nvme1", 00:21:52.686 "trtype": "tcp", 00:21:52.686 "traddr": "10.0.0.2", 00:21:52.686 "adrfam": "ipv4", 00:21:52.686 "trsvcid": "4420", 00:21:52.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.686 "hdgst": false, 00:21:52.686 "ddgst": false 00:21:52.686 }, 00:21:52.686 "method": "bdev_nvme_attach_controller" 00:21:52.686 }' 00:21:52.686 [2024-12-15 13:02:00.101575] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:21:52.686 [2024-12-15 13:02:00.101620] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1010019 ] 00:21:52.686 [2024-12-15 13:02:00.180873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:52.686 [2024-12-15 13:02:00.218359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.686 [2024-12-15 13:02:00.218466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.686 [2024-12-15 13:02:00.218468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.686 I/O targets: 00:21:52.686 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:52.686 00:21:52.686 00:21:52.686 CUnit - A unit testing framework for C - Version 2.1-3 00:21:52.686 http://cunit.sourceforge.net/ 00:21:52.686 00:21:52.686 00:21:52.686 Suite: bdevio tests on: Nvme1n1 00:21:52.686 Test: blockdev write read block ...passed 00:21:52.947 Test: blockdev write zeroes read block ...passed 00:21:52.948 Test: blockdev write zeroes read no split ...passed 00:21:52.948 Test: blockdev write zeroes 
read split ...passed 00:21:52.948 Test: blockdev write zeroes read split partial ...passed 00:21:52.948 Test: blockdev reset ...[2024-12-15 13:02:00.627348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:52.948 [2024-12-15 13:02:00.627418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1366d00 (9): Bad file descriptor 00:21:52.948 [2024-12-15 13:02:00.769711] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:21:52.948 passed 00:21:52.948 Test: blockdev write read 8 blocks ...passed 00:21:52.948 Test: blockdev write read size > 128k ...passed 00:21:52.948 Test: blockdev write read invalid size ...passed 00:21:52.948 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.948 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.948 Test: blockdev write read max offset ...passed 00:21:53.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:53.206 Test: blockdev writev readv 8 blocks ...passed 00:21:53.206 Test: blockdev writev readv 30 x 1block ...passed 00:21:53.206 Test: blockdev writev readv block ...passed 00:21:53.206 Test: blockdev writev readv size > 128k ...passed 00:21:53.206 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:53.206 Test: blockdev comparev and writev ...[2024-12-15 13:02:00.938368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.206 [2024-12-15 13:02:00.938406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:53.206 [2024-12-15 13:02:00.938421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.206 [2024-12-15 
13:02:00.938429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:53.206 [2024-12-15 13:02:00.938660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.206 [2024-12-15 13:02:00.938671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:53.207 [2024-12-15 13:02:00.938682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.207 [2024-12-15 13:02:00.938689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:53.207 [2024-12-15 13:02:00.938919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.207 [2024-12-15 13:02:00.938930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:53.207 [2024-12-15 13:02:00.938942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.207 [2024-12-15 13:02:00.938949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:53.207 [2024-12-15 13:02:00.939171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.207 [2024-12-15 13:02:00.939182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:53.207 [2024-12-15 13:02:00.939193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.207 [2024-12-15 13:02:00.939201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:53.207 passed 00:21:53.207 Test: blockdev nvme passthru rw ...passed 00:21:53.207 Test: blockdev nvme passthru vendor specific ...[2024-12-15 13:02:01.021093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.207 [2024-12-15 13:02:01.021109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:53.207 [2024-12-15 13:02:01.021210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.207 [2024-12-15 13:02:01.021220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:53.207 [2024-12-15 13:02:01.021320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.207 [2024-12-15 13:02:01.021330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:53.207 [2024-12-15 13:02:01.021433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.207 [2024-12-15 13:02:01.021444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:53.207 passed 00:21:53.207 Test: blockdev nvme admin passthru ...passed 00:21:53.207 Test: blockdev copy ...passed 00:21:53.207 00:21:53.207 Run Summary: Type Total Ran Passed Failed Inactive 00:21:53.207 suites 1 1 n/a 0 0 00:21:53.207 tests 23 23 23 0 0 00:21:53.207 asserts 152 152 152 0 n/a 00:21:53.207 00:21:53.207 Elapsed time = 1.166 seconds 
00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.465 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.465 rmmod nvme_tcp 00:21:53.724 rmmod nvme_fabrics 00:21:53.724 rmmod nvme_keyring 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1009983 ']' 00:21:53.724 13:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1009983 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1009983 ']' 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1009983 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1009983 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1009983' 00:21:53.724 killing process with pid 1009983 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1009983 00:21:53.724 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1009983 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:53.983 13:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.983 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.517 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:56.517 00:21:56.517 real 0m10.300s 00:21:56.517 user 0m11.399s 00:21:56.517 sys 0m5.302s 00:21:56.517 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:56.517 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:56.517 ************************************ 00:21:56.517 END TEST nvmf_bdevio_no_huge 00:21:56.517 ************************************ 00:21:56.517 13:02:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:56.517 13:02:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:56.517 13:02:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.517 13:02:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:56.517 
************************************ 00:21:56.517 START TEST nvmf_tls 00:21:56.517 ************************************ 00:21:56.517 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:56.517 * Looking for test storage... 00:21:56.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.517 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:56.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.517 --rc genhtml_branch_coverage=1 00:21:56.517 --rc genhtml_function_coverage=1 00:21:56.517 --rc genhtml_legend=1 00:21:56.517 --rc geninfo_all_blocks=1 00:21:56.517 --rc geninfo_unexecuted_blocks=1 00:21:56.517 00:21:56.517 ' 00:21:56.517 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:56.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.517 --rc genhtml_branch_coverage=1 00:21:56.517 --rc genhtml_function_coverage=1 00:21:56.517 --rc genhtml_legend=1 00:21:56.517 --rc geninfo_all_blocks=1 00:21:56.517 --rc geninfo_unexecuted_blocks=1 00:21:56.517 00:21:56.517 ' 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:56.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.518 --rc genhtml_branch_coverage=1 00:21:56.518 --rc genhtml_function_coverage=1 00:21:56.518 --rc genhtml_legend=1 00:21:56.518 --rc geninfo_all_blocks=1 00:21:56.518 --rc geninfo_unexecuted_blocks=1 00:21:56.518 00:21:56.518 ' 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:56.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.518 --rc genhtml_branch_coverage=1 00:21:56.518 --rc genhtml_function_coverage=1 00:21:56.518 --rc genhtml_legend=1 00:21:56.518 --rc geninfo_all_blocks=1 00:21:56.518 --rc geninfo_unexecuted_blocks=1 00:21:56.518 00:21:56.518 ' 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.518 
13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:56.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:21:56.518 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.087 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.088 13:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:03.088 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:03.088 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.088 13:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:03.088 Found net devices under 0000:af:00.0: cvl_0_0 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:03.088 Found net devices under 0000:af:00.1: cvl_0_1 00:22:03.088 13:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:03.088 
13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:03.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:22:03.088 00:22:03.088 --- 10.0.0.2 ping statistics --- 00:22:03.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.088 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:03.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:22:03.088 00:22:03.088 --- 10.0.0.1 ping statistics --- 00:22:03.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.088 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1013718 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1013718 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1013718 ']' 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.088 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.088 [2024-12-15 13:02:10.044007] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:03.088 [2024-12-15 13:02:10.044058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.088 [2024-12-15 13:02:10.125603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.088 [2024-12-15 13:02:10.146885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.088 [2024-12-15 13:02:10.146923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:03.088 [2024-12-15 13:02:10.146931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.088 [2024-12-15 13:02:10.146938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.088 [2024-12-15 13:02:10.146945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.088 [2024-12-15 13:02:10.147420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.088 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.088 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:03.088 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:03.089 true 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:03.089 
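The trace above repeatedly reads a socket option back with `sock_impl_get_options`, extracts one field via `jq -r .tls_version`, and string-compares it (`[[ 0 != \0 ]]`). A minimal Python sketch of that same verify-after-set pattern, using stubbed JSON replies in place of a live `rpc.py` call (the field names match the log; the stub strings are illustrative, not captured output):

```python
import json

def check_option(rpc_reply: str, field: str, expected):
    """Parse a sock_impl_get_options-style JSON reply and verify one field,
    mirroring the harness's `jq -r .tls_version` + string-compare step."""
    opts = json.loads(rpc_reply)
    actual = opts[field]
    if actual != expected:
        raise AssertionError(f"{field}: expected {expected!r}, got {actual!r}")
    return actual

# Stubbed replies standing in for `rpc.py sock_impl_get_options -i ssl`.
default_reply = '{"tls_version": 0, "enable_ktls": false}'
after_set13   = '{"tls_version": 13, "enable_ktls": false}'

check_option(default_reply, "tls_version", 0)   # fresh target: version unset
check_option(after_set13, "tls_version", 13)    # after sock_impl_set_options --tls-version 13
```

The same helper covers the later `enable_ktls` checks; only the field name and expected value change.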
13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:03.089 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:03.347 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:03.347 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:03.347 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:03.347 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:03.606 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:03.606 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:03.606 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:03.606 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:03.865 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:03.865 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:03.865 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:03.865 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:03.865 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:04.124 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:04.124 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:04.124 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:04.383 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:04.383 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:04.642 13:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.0F8YKkzKau 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.OAY8ArJkcc 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0F8YKkzKau 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.OAY8ArJkcc 00:22:04.642 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:04.901 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:05.160 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.0F8YKkzKau 00:22:05.160 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0F8YKkzKau 00:22:05.160 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:05.160 [2024-12-15 13:02:13.029091] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.160 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:05.419 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:05.678 [2024-12-15 13:02:13.365929] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.678 [2024-12-15 13:02:13.366118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.678 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:05.678 malloc0 00:22:05.678 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:05.937 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0F8YKkzKau 00:22:06.195 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:06.453 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.0F8YKkzKau 00:22:16.444 Initializing NVMe Controllers 00:22:16.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.444 Initialization complete. Launching workers. 
00:22:16.444 ======================================================== 00:22:16.444 Latency(us) 00:22:16.444 Device Information : IOPS MiB/s Average min max 00:22:16.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17003.25 66.42 3764.08 759.44 5675.01 00:22:16.444 ======================================================== 00:22:16.444 Total : 17003.25 66.42 3764.08 759.44 5675.01 00:22:16.444 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0F8YKkzKau 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0F8YKkzKau 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1016153 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1016153 /var/tmp/bdevperf.sock 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1016153 ']' 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
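In the Latency(us) tables above, the MiB/s column is derived directly from the IOPS figure and the fixed 4096-byte IO size (`-o 4096`). A quick sketch reproducing the reported numbers from this run:

```python
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed IO size."""
    return iops * io_size_bytes / (1024 * 1024)

# Values copied from the perf summaries in this log.
print(round(iops_to_mibps(17003.25), 2))  # spdk_nvme_perf run -> 66.42
print(round(iops_to_mibps(5563.63), 2))   # bdevperf TLSTESTn1 run -> 21.73
```

This also matches the JSON result block later in the log, where `iops` 5563.63… corresponds to `mibps` 21.73….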
00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.444 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.444 [2024-12-15 13:02:24.271957] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:16.444 [2024-12-15 13:02:24.272004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016153 ] 00:22:16.444 [2024-12-15 13:02:24.345832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.703 [2024-12-15 13:02:24.368233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.703 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.703 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:16.703 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0F8YKkzKau 00:22:16.962 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:16.962 [2024-12-15 13:02:24.820092] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.220 TLSTESTn1 00:22:17.220 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:17.220 Running I/O for 10 seconds... 00:22:19.552 5488.00 IOPS, 21.44 MiB/s [2024-12-15T12:02:28.047Z] 5379.00 IOPS, 21.01 MiB/s [2024-12-15T12:02:29.037Z] 5472.00 IOPS, 21.38 MiB/s [2024-12-15T12:02:30.414Z] 5504.00 IOPS, 21.50 MiB/s [2024-12-15T12:02:31.352Z] 5533.40 IOPS, 21.61 MiB/s [2024-12-15T12:02:32.291Z] 5532.00 IOPS, 21.61 MiB/s [2024-12-15T12:02:33.231Z] 5540.71 IOPS, 21.64 MiB/s [2024-12-15T12:02:34.169Z] 5546.88 IOPS, 21.67 MiB/s [2024-12-15T12:02:35.108Z] 5569.00 IOPS, 21.75 MiB/s [2024-12-15T12:02:35.108Z] 5558.20 IOPS, 21.71 MiB/s 00:22:27.201 Latency(us) 00:22:27.201 [2024-12-15T12:02:35.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.201 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:27.201 Verification LBA range: start 0x0 length 0x2000 00:22:27.201 TLSTESTn1 : 10.01 5563.63 21.73 0.00 0.00 22973.30 5055.63 23093.64 00:22:27.201 [2024-12-15T12:02:35.108Z] =================================================================================================================== 00:22:27.201 [2024-12-15T12:02:35.108Z] Total : 5563.63 21.73 0.00 0.00 22973.30 5055.63 23093.64 00:22:27.201 { 00:22:27.201 "results": [ 00:22:27.201 { 00:22:27.201 "job": "TLSTESTn1", 00:22:27.201 "core_mask": "0x4", 00:22:27.201 "workload": "verify", 00:22:27.201 "status": "finished", 00:22:27.201 "verify_range": { 00:22:27.201 "start": 0, 00:22:27.201 "length": 8192 00:22:27.201 }, 00:22:27.201 "queue_depth": 128, 00:22:27.201 "io_size": 4096, 00:22:27.201 "runtime": 10.01306, 00:22:27.201 "iops": 
5563.633894134261, 00:22:27.201 "mibps": 21.732944898961957, 00:22:27.201 "io_failed": 0, 00:22:27.201 "io_timeout": 0, 00:22:27.201 "avg_latency_us": 22973.29782117791, 00:22:27.201 "min_latency_us": 5055.634285714285, 00:22:27.201 "max_latency_us": 23093.638095238097 00:22:27.201 } 00:22:27.201 ], 00:22:27.201 "core_count": 1 00:22:27.201 } 00:22:27.201 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:27.201 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1016153 00:22:27.201 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1016153 ']' 00:22:27.201 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1016153 00:22:27.201 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:27.201 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.201 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1016153 00:22:27.460 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:27.460 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:27.460 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1016153' 00:22:27.460 killing process with pid 1016153 00:22:27.460 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1016153 00:22:27.460 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.460 00:22:27.461 Latency(us) 00:22:27.461 [2024-12-15T12:02:35.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.461 [2024-12-15T12:02:35.368Z] 
=================================================================================================================== 00:22:27.461 [2024-12-15T12:02:35.368Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1016153 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OAY8ArJkcc 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OAY8ArJkcc 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OAY8ArJkcc 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OAY8ArJkcc 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1017816 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1017816 /var/tmp/bdevperf.sock 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1017816 ']' 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.461 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.461 [2024-12-15 13:02:35.323357] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:27.461 [2024-12-15 13:02:35.323401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017816 ] 00:22:27.720 [2024-12-15 13:02:35.387384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.720 [2024-12-15 13:02:35.409658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.720 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.720 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:27.720 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OAY8ArJkcc 00:22:27.979 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:27.979 [2024-12-15 13:02:35.849519] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.979 [2024-12-15 13:02:35.854969] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.979 [2024-12-15 13:02:35.855744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f70c0 (107): Transport endpoint is not connected 00:22:27.979 [2024-12-15 13:02:35.856739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f70c0 (9): Bad file descriptor 00:22:27.979 
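Both PSKs exercised in this run (`/tmp/tmp.0F8YKkzKau` holding the matching key, `/tmp/tmp.OAY8ArJkcc` the mismatched one) were generated earlier by `format_interchange_psk`, which wraps the configured secret in the NVMe-oF interchange form `NVMeTLSkey-1:<hh>:<base64>:`. A hedged sketch of that wrapping — the trailing little-endian CRC-32 over the secret bytes is an assumption about what the harness's inline `python -` helper does, not verified against SPDK's source:

```python
import base64
import zlib

def format_interchange_psk(secret: str, hash_id: int = 1) -> str:
    """Wrap a secret as NVMeTLSkey-1:<hh>:<base64(secret || crc)>:
    Assumption: a 4-byte CRC-32 of the secret is appended little-endian,
    mirroring what the log's `format_key NVMeTLSkey-1 ... 1` step appears to do."""
    data = secret.encode()
    data += zlib.crc32(data).to_bytes(4, "little")
    return f"NVMeTLSkey-1:{hash_id:02d}:{base64.b64encode(data).decode()}:"

def parse_interchange_psk(psk: str) -> str:
    """Round-trip check: strip the framing, verify the CRC, return the secret."""
    prefix, _hh, b64, _empty = psk.split(":")
    assert prefix == "NVMeTLSkey-1"
    raw = base64.b64decode(b64)
    secret, crc = raw[:-4], raw[-4:]
    assert crc == zlib.crc32(secret).to_bytes(4, "little"), "CRC mismatch"
    return secret.decode()

# Same secret as the log's first format_interchange_psk invocation.
key = format_interchange_psk("00112233445566778899aabbccddeeff")
assert parse_interchange_psk(key) == "00112233445566778899aabbccddeeff"
```

The round-trip assertion checks only self-consistency of the sketch; it does not claim byte-for-byte equality with the `NVMeTLSkey-1:01:...` strings printed above.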
[2024-12-15 13:02:35.857740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:27.979 [2024-12-15 13:02:35.857753] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:27.979 [2024-12-15 13:02:35.857760] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:27.979 [2024-12-15 13:02:35.857768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:27.979 request: 00:22:27.979 { 00:22:27.979 "name": "TLSTEST", 00:22:27.979 "trtype": "tcp", 00:22:27.979 "traddr": "10.0.0.2", 00:22:27.979 "adrfam": "ipv4", 00:22:27.979 "trsvcid": "4420", 00:22:27.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.979 "prchk_reftag": false, 00:22:27.979 "prchk_guard": false, 00:22:27.979 "hdgst": false, 00:22:27.979 "ddgst": false, 00:22:27.979 "psk": "key0", 00:22:27.979 "allow_unrecognized_csi": false, 00:22:27.979 "method": "bdev_nvme_attach_controller", 00:22:27.979 "req_id": 1 00:22:27.979 } 00:22:27.979 Got JSON-RPC error response 00:22:27.979 response: 00:22:27.979 { 00:22:27.980 "code": -5, 00:22:27.980 "message": "Input/output error" 00:22:27.980 } 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1017816 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1017816 ']' 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1017816 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1017816 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1017816' 00:22:28.239 killing process with pid 1017816 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1017816 00:22:28.239 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.239 00:22:28.239 Latency(us) 00:22:28.239 [2024-12-15T12:02:36.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.239 [2024-12-15T12:02:36.146Z] =================================================================================================================== 00:22:28.239 [2024-12-15T12:02:36.146Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.239 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1017816 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0F8YKkzKau 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0F8YKkzKau 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.239 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0F8YKkzKau 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0F8YKkzKau 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018003 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018003 
/var/tmp/bdevperf.sock 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018003 ']' 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.240 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.240 [2024-12-15 13:02:36.138458] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:28.240 [2024-12-15 13:02:36.138504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018003 ] 00:22:28.499 [2024-12-15 13:02:36.210008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.499 [2024-12-15 13:02:36.229254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.499 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.499 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:28.499 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0F8YKkzKau 00:22:28.759 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:29.019 [2024-12-15 13:02:36.692097] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.019 [2024-12-15 13:02:36.697026] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:29.019 [2024-12-15 13:02:36.697049] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:29.019 [2024-12-15 13:02:36.697087] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:29.019 [2024-12-15 13:02:36.697352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa790c0 (107): Transport endpoint is not connected 00:22:29.019 [2024-12-15 13:02:36.698343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa790c0 (9): Bad file descriptor 00:22:29.019 [2024-12-15 13:02:36.699345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:29.019 [2024-12-15 13:02:36.699356] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:29.019 [2024-12-15 13:02:36.699364] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:29.019 [2024-12-15 13:02:36.699373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:29.019 request: 00:22:29.019 { 00:22:29.019 "name": "TLSTEST", 00:22:29.019 "trtype": "tcp", 00:22:29.019 "traddr": "10.0.0.2", 00:22:29.019 "adrfam": "ipv4", 00:22:29.019 "trsvcid": "4420", 00:22:29.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.019 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:29.019 "prchk_reftag": false, 00:22:29.019 "prchk_guard": false, 00:22:29.019 "hdgst": false, 00:22:29.019 "ddgst": false, 00:22:29.019 "psk": "key0", 00:22:29.019 "allow_unrecognized_csi": false, 00:22:29.019 "method": "bdev_nvme_attach_controller", 00:22:29.019 "req_id": 1 00:22:29.019 } 00:22:29.019 Got JSON-RPC error response 00:22:29.019 response: 00:22:29.019 { 00:22:29.019 "code": -5, 00:22:29.019 "message": "Input/output error" 00:22:29.019 } 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018003 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018003 ']' 00:22:29.019 13:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018003 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018003 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018003' 00:22:29.019 killing process with pid 1018003 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018003 00:22:29.019 Received shutdown signal, test time was about 10.000000 seconds 00:22:29.019 00:22:29.019 Latency(us) 00:22:29.019 [2024-12-15T12:02:36.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.019 [2024-12-15T12:02:36.926Z] =================================================================================================================== 00:22:29.019 [2024-12-15T12:02:36.926Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018003 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.019 13:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0F8YKkzKau 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0F8YKkzKau 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0F8YKkzKau 00:22:29.019 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0F8YKkzKau 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018204 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018204 /var/tmp/bdevperf.sock 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018204 ']' 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.279 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.279 [2024-12-15 13:02:36.969960] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:29.279 [2024-12-15 13:02:36.970011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018204 ] 00:22:29.279 [2024-12-15 13:02:37.047128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.279 [2024-12-15 13:02:37.067656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.279 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.279 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:29.279 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0F8YKkzKau 00:22:29.539 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:29.799 [2024-12-15 13:02:37.518232] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.799 [2024-12-15 13:02:37.529115] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:29.799 [2024-12-15 13:02:37.529137] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:29.799 [2024-12-15 13:02:37.529159] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:22:29.799 [2024-12-15 13:02:37.529531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19540c0 (107): Transport endpoint is not connected 00:22:29.799 [2024-12-15 13:02:37.530524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19540c0 (9): Bad file descriptor 00:22:29.799 [2024-12-15 13:02:37.531526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:29.799 [2024-12-15 13:02:37.531535] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:29.799 [2024-12-15 13:02:37.531542] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:29.799 [2024-12-15 13:02:37.531550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:22:29.799 request: 00:22:29.799 { 00:22:29.799 "name": "TLSTEST", 00:22:29.799 "trtype": "tcp", 00:22:29.799 "traddr": "10.0.0.2", 00:22:29.799 "adrfam": "ipv4", 00:22:29.799 "trsvcid": "4420", 00:22:29.799 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.799 "prchk_reftag": false, 00:22:29.799 "prchk_guard": false, 00:22:29.799 "hdgst": false, 00:22:29.799 "ddgst": false, 00:22:29.799 "psk": "key0", 00:22:29.799 "allow_unrecognized_csi": false, 00:22:29.799 "method": "bdev_nvme_attach_controller", 00:22:29.799 "req_id": 1 00:22:29.799 } 00:22:29.799 Got JSON-RPC error response 00:22:29.799 response: 00:22:29.799 { 00:22:29.799 "code": -5, 00:22:29.799 "message": "Input/output error" 00:22:29.799 } 00:22:29.799 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018204 00:22:29.799 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018204 ']' 00:22:29.799 13:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018204 00:22:29.799 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:29.799 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.799 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018204 00:22:29.799 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:29.799 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:29.799 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018204' 00:22:29.799 killing process with pid 1018204 00:22:29.800 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018204 00:22:29.800 Received shutdown signal, test time was about 10.000000 seconds 00:22:29.800 00:22:29.800 Latency(us) 00:22:29.800 [2024-12-15T12:02:37.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.800 [2024-12-15T12:02:37.707Z] =================================================================================================================== 00:22:29.800 [2024-12-15T12:02:37.707Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:29.800 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018204 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.060 13:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018245 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.060 13:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018245 /var/tmp/bdevperf.sock 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018245 ']' 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.060 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.060 [2024-12-15 13:02:37.809340] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:30.061 [2024-12-15 13:02:37.809388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018245 ] 00:22:30.061 [2024-12-15 13:02:37.886533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.061 [2024-12-15 13:02:37.906133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.320 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.320 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:30.320 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:30.320 [2024-12-15 13:02:38.160730] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:30.320 [2024-12-15 13:02:38.160760] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:30.320 request: 00:22:30.320 { 00:22:30.320 "name": "key0", 00:22:30.320 "path": "", 00:22:30.320 "method": "keyring_file_add_key", 00:22:30.320 "req_id": 1 00:22:30.320 } 00:22:30.320 Got JSON-RPC error response 00:22:30.320 response: 00:22:30.320 { 00:22:30.320 "code": -1, 00:22:30.320 "message": "Operation not permitted" 00:22:30.320 } 00:22:30.320 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:30.579 [2024-12-15 13:02:38.373376] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:22:30.579 [2024-12-15 13:02:38.373406] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:30.579 request: 00:22:30.579 { 00:22:30.579 "name": "TLSTEST", 00:22:30.579 "trtype": "tcp", 00:22:30.579 "traddr": "10.0.0.2", 00:22:30.579 "adrfam": "ipv4", 00:22:30.579 "trsvcid": "4420", 00:22:30.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.579 "prchk_reftag": false, 00:22:30.579 "prchk_guard": false, 00:22:30.580 "hdgst": false, 00:22:30.580 "ddgst": false, 00:22:30.580 "psk": "key0", 00:22:30.580 "allow_unrecognized_csi": false, 00:22:30.580 "method": "bdev_nvme_attach_controller", 00:22:30.580 "req_id": 1 00:22:30.580 } 00:22:30.580 Got JSON-RPC error response 00:22:30.580 response: 00:22:30.580 { 00:22:30.580 "code": -126, 00:22:30.580 "message": "Required key not available" 00:22:30.580 } 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018245 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018245 ']' 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018245 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018245 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018245' 00:22:30.580 killing process with pid 1018245 
00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018245 00:22:30.580 Received shutdown signal, test time was about 10.000000 seconds 00:22:30.580 00:22:30.580 Latency(us) 00:22:30.580 [2024-12-15T12:02:38.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.580 [2024-12-15T12:02:38.487Z] =================================================================================================================== 00:22:30.580 [2024-12-15T12:02:38.487Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:30.580 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018245 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1013718 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1013718 ']' 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1013718 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1013718 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1013718' 00:22:30.839 killing process with pid 1013718 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1013718 00:22:30.839 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1013718 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.vIBqfv6WTu 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:31.099 13:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.vIBqfv6WTu 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1018491 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1018491 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018491 ']' 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.099 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.099 [2024-12-15 13:02:38.910125] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:31.099 [2024-12-15 13:02:38.910175] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.099 [2024-12-15 13:02:38.989146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.359 [2024-12-15 13:02:39.009911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.359 [2024-12-15 13:02:39.009947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.359 [2024-12-15 13:02:39.009955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.359 [2024-12-15 13:02:39.009961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.359 [2024-12-15 13:02:39.009966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
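The `format_interchange_psk ... 2` step above turns the configured hex key string into the `NVMeTLSkey-1:02:...:` interchange form seen in `key_long`. A minimal sketch of that formatting, assuming the same semantics as SPDK's `format_key` helper (the key string is used as literal ASCII bytes, with a little-endian CRC32 appended before base64 encoding):

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Build an NVMe TLS PSK interchange string: prefix, hash indicator,
    and base64 of the key bytes with a little-endian CRC32 appended."""
    data = key.encode("ascii")                    # hex string treated as literal bytes
    crc = zlib.crc32(data).to_bytes(4, "little")  # integrity check appended to the key
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"{prefix}:{digest:02x}:{b64}:"

# Reproduces the key_long value logged above for digest 2 (SHA-384).
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
```

The trailing CRC is why the base64 payload in the log ends with `wWXNJw==` after the encoded key characters.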
00:22:31.359 [2024-12-15 13:02:39.010455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.359 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.359 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:31.359 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:31.359 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.359 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.359 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.359 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.vIBqfv6WTu 00:22:31.359 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vIBqfv6WTu 00:22:31.359 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:31.618 [2024-12-15 13:02:39.312257] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.618 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:31.878 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:31.878 [2024-12-15 13:02:39.729341] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:31.878 [2024-12-15 13:02:39.729527] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:31.878 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:32.137 malloc0 00:22:32.137 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:32.397 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vIBqfv6WTu 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vIBqfv6WTu 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vIBqfv6WTu 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018737 00:22:32.657 13:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018737 /var/tmp/bdevperf.sock 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018737 ']' 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.657 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.917 [2024-12-15 13:02:40.590476] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:32.917 [2024-12-15 13:02:40.590526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018737 ] 00:22:32.917 [2024-12-15 13:02:40.664114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.917 [2024-12-15 13:02:40.687030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.917 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.917 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:32.917 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vIBqfv6WTu 00:22:33.177 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:33.437 [2024-12-15 13:02:41.162172] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:33.437 TLSTESTn1 00:22:33.437 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:33.696 Running I/O for 10 seconds... 
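The per-sample MiB/s figures bdevperf reports below are derived from the measured IOPS and the 4096-byte I/O size passed via `-o 4096`. A quick check of that conversion (the helper name here is illustrative, not a bdevperf API):

```python
def iops_to_mibps(iops: float, io_size: int = 4096) -> float:
    # bytes per second divided by 2**20 gives MiB/s
    return iops * io_size / (1 << 20)

# The run's overall 5583.72 IOPS at 4 KiB corresponds to ~21.81 MiB/s,
# matching the "mibps" field in the JSON results.
print(round(iops_to_mibps(5583.72), 2))
```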
00:22:35.575 5395.00 IOPS, 21.07 MiB/s [2024-12-15T12:02:44.421Z] 5548.00 IOPS, 21.67 MiB/s [2024-12-15T12:02:45.799Z] 5578.00 IOPS, 21.79 MiB/s [2024-12-15T12:02:46.367Z] 5602.50 IOPS, 21.88 MiB/s [2024-12-15T12:02:47.749Z] 5592.40 IOPS, 21.85 MiB/s [2024-12-15T12:02:48.688Z] 5604.67 IOPS, 21.89 MiB/s [2024-12-15T12:02:49.626Z] 5608.43 IOPS, 21.91 MiB/s [2024-12-15T12:02:50.564Z] 5617.00 IOPS, 21.94 MiB/s [2024-12-15T12:02:51.503Z] 5572.56 IOPS, 21.77 MiB/s [2024-12-15T12:02:51.503Z] 5578.90 IOPS, 21.79 MiB/s 00:22:43.596 Latency(us) 00:22:43.596 [2024-12-15T12:02:51.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.596 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:43.596 Verification LBA range: start 0x0 length 0x2000 00:22:43.596 TLSTESTn1 : 10.01 5583.72 21.81 0.00 0.00 22889.97 5554.96 23343.30 00:22:43.596 [2024-12-15T12:02:51.503Z] =================================================================================================================== 00:22:43.596 [2024-12-15T12:02:51.503Z] Total : 5583.72 21.81 0.00 0.00 22889.97 5554.96 23343.30 00:22:43.596 { 00:22:43.596 "results": [ 00:22:43.596 { 00:22:43.596 "job": "TLSTESTn1", 00:22:43.596 "core_mask": "0x4", 00:22:43.596 "workload": "verify", 00:22:43.596 "status": "finished", 00:22:43.596 "verify_range": { 00:22:43.596 "start": 0, 00:22:43.596 "length": 8192 00:22:43.596 }, 00:22:43.596 "queue_depth": 128, 00:22:43.596 "io_size": 4096, 00:22:43.596 "runtime": 10.013938, 00:22:43.596 "iops": 5583.717414667437, 00:22:43.596 "mibps": 21.811396151044676, 00:22:43.596 "io_failed": 0, 00:22:43.596 "io_timeout": 0, 00:22:43.596 "avg_latency_us": 22889.967288920685, 00:22:43.596 "min_latency_us": 5554.95619047619, 00:22:43.596 "max_latency_us": 23343.299047619046 00:22:43.596 } 00:22:43.596 ], 00:22:43.596 "core_count": 1 00:22:43.596 } 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1018737 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018737 ']' 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018737 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018737 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018737' 00:22:43.596 killing process with pid 1018737 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018737 00:22:43.596 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.596 00:22:43.596 Latency(us) 00:22:43.596 [2024-12-15T12:02:51.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.596 [2024-12-15T12:02:51.503Z] =================================================================================================================== 00:22:43.596 [2024-12-15T12:02:51.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.596 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018737 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.vIBqfv6WTu 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vIBqfv6WTu 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vIBqfv6WTu 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vIBqfv6WTu 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vIBqfv6WTu 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1020522 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1020522 /var/tmp/bdevperf.sock 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1020522 ']' 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.856 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.856 [2024-12-15 13:02:51.675129] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:43.856 [2024-12-15 13:02:51.675177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020522 ] 00:22:43.856 [2024-12-15 13:02:51.747888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.116 [2024-12-15 13:02:51.767653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.116 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.116 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:44.116 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vIBqfv6WTu 00:22:44.376 [2024-12-15 13:02:52.029970] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vIBqfv6WTu': 0100666 00:22:44.376 [2024-12-15 13:02:52.030011] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:44.376 request: 00:22:44.376 { 00:22:44.376 "name": "key0", 00:22:44.376 "path": "/tmp/tmp.vIBqfv6WTu", 00:22:44.376 "method": "keyring_file_add_key", 00:22:44.376 "req_id": 1 00:22:44.376 } 00:22:44.376 Got JSON-RPC error response 00:22:44.376 response: 00:22:44.376 { 00:22:44.376 "code": -1, 00:22:44.376 "message": "Operation not permitted" 00:22:44.376 } 00:22:44.376 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.376 [2024-12-15 13:02:52.230571] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.376 [2024-12-15 13:02:52.230605] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:44.376 request: 00:22:44.376 { 00:22:44.376 "name": "TLSTEST", 00:22:44.376 "trtype": "tcp", 00:22:44.376 "traddr": "10.0.0.2", 00:22:44.376 "adrfam": "ipv4", 00:22:44.376 "trsvcid": "4420", 00:22:44.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.376 "prchk_reftag": false, 00:22:44.376 "prchk_guard": false, 00:22:44.376 "hdgst": false, 00:22:44.376 "ddgst": false, 00:22:44.376 "psk": "key0", 00:22:44.376 "allow_unrecognized_csi": false, 00:22:44.376 "method": "bdev_nvme_attach_controller", 00:22:44.376 "req_id": 1 00:22:44.376 } 00:22:44.376 Got JSON-RPC error response 00:22:44.376 response: 00:22:44.376 { 00:22:44.376 "code": -126, 00:22:44.376 "message": "Required key not available" 00:22:44.376 } 00:22:44.376 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1020522 00:22:44.376 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1020522 ']' 00:22:44.376 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1020522 00:22:44.376 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.376 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.376 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1020522 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1020522' 00:22:44.636 killing process with pid 1020522 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1020522 00:22:44.636 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.636 00:22:44.636 Latency(us) 00:22:44.636 [2024-12-15T12:02:52.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.636 [2024-12-15T12:02:52.543Z] =================================================================================================================== 00:22:44.636 [2024-12-15T12:02:52.543Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1020522 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1018491 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018491 ']' 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018491 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018491 00:22:44.636 
13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018491' 00:22:44.636 killing process with pid 1018491 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018491 00:22:44.636 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018491 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1020757 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1020757 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1020757 ']' 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:22:44.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.896 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.896 [2024-12-15 13:02:52.730474] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:44.896 [2024-12-15 13:02:52.730522] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.156 [2024-12-15 13:02:52.809239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.156 [2024-12-15 13:02:52.829680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.156 [2024-12-15 13:02:52.829717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.156 [2024-12-15 13:02:52.829725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.156 [2024-12-15 13:02:52.829731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.156 [2024-12-15 13:02:52.829737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
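The negative tests in this section hinge on key-file permissions: after `chmod 0666`, `keyring_file_add_key` fails with "Invalid permissions for key file ... 0100666". A sketch of such a check, assuming (as the logged error for mode 0100666 suggests) that any group/other permission bits disqualify the file; `check_key_permissions` is a hypothetical stand-in for `keyring_file_check_path`, not SPDK code:

```python
import os
import stat
import tempfile

def check_key_permissions(path: str) -> None:
    # Reject key files accessible to group or others, mirroring the
    # keyring_file_check_path failure logged for mode 0100666.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(f"Invalid permissions for key file '{path}': 0{mode:o}")

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o600)
check_key_permissions(path)      # owner-only access: accepted
os.chmod(path, 0o666)
try:
    check_key_permissions(path)  # world-writable: rejected, as in the RPC error above
    raised = False
except PermissionError:
    raised = True
os.unlink(path)
print(raised)
```

This is why the test flips the file between 0600 (expected to succeed) and 0666 (expected to fail with "Operation not permitted").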
00:22:45.156 [2024-12-15 13:02:52.830257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.vIBqfv6WTu 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vIBqfv6WTu 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.vIBqfv6WTu 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vIBqfv6WTu 00:22:45.156 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.416 [2024-12-15 13:02:53.133213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.416 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:45.676 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:45.676 [2024-12-15 13:02:53.522224] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.676 [2024-12-15 13:02:53.522415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.676 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:45.935 malloc0 00:22:45.935 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.194 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vIBqfv6WTu 00:22:46.453 [2024-12-15 13:02:54.131790] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vIBqfv6WTu': 0100666 00:22:46.453 [2024-12-15 13:02:54.131817] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:46.453 request: 00:22:46.453 { 00:22:46.453 "name": "key0", 00:22:46.453 "path": "/tmp/tmp.vIBqfv6WTu", 00:22:46.453 "method": "keyring_file_add_key", 00:22:46.453 "req_id": 1 
00:22:46.453 } 00:22:46.453 Got JSON-RPC error response 00:22:46.453 response: 00:22:46.453 { 00:22:46.453 "code": -1, 00:22:46.453 "message": "Operation not permitted" 00:22:46.453 } 00:22:46.453 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:46.453 [2024-12-15 13:02:54.320296] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:46.453 [2024-12-15 13:02:54.320324] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:46.453 request: 00:22:46.453 { 00:22:46.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.453 "host": "nqn.2016-06.io.spdk:host1", 00:22:46.453 "psk": "key0", 00:22:46.453 "method": "nvmf_subsystem_add_host", 00:22:46.453 "req_id": 1 00:22:46.453 } 00:22:46.453 Got JSON-RPC error response 00:22:46.453 response: 00:22:46.453 { 00:22:46.453 "code": -32603, 00:22:46.453 "message": "Internal error" 00:22:46.453 } 00:22:46.453 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:46.453 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:46.453 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:46.453 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:46.453 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1020757 00:22:46.454 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1020757 ']' 00:22:46.454 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1020757 00:22:46.454 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:46.454 13:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1020757 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1020757' 00:22:46.713 killing process with pid 1020757 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1020757 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1020757 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.vIBqfv6WTu 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1021021 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1021021 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021021 ']' 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.713 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.973 [2024-12-15 13:02:54.626484] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:46.973 [2024-12-15 13:02:54.626538] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.973 [2024-12-15 13:02:54.702298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.973 [2024-12-15 13:02:54.721613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.973 [2024-12-15 13:02:54.721648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.973 [2024-12-15 13:02:54.721655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.973 [2024-12-15 13:02:54.721661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.973 [2024-12-15 13:02:54.721666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:46.973 [2024-12-15 13:02:54.722174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.973 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.973 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:46.973 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.973 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.973 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.973 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.973 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.vIBqfv6WTu 00:22:46.973 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vIBqfv6WTu 00:22:46.973 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:47.232 [2024-12-15 13:02:55.024507] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.232 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:47.491 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:47.749 [2024-12-15 13:02:55.421512] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:47.749 [2024-12-15 13:02:55.421703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:47.749 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:47.749 malloc0 00:22:47.749 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:48.008 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vIBqfv6WTu 00:22:48.267 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:48.526 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.526 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1021269 00:22:48.526 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1021269 /var/tmp/bdevperf.sock 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021269 ']' 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:22:48.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.527 [2024-12-15 13:02:56.248785] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:48.527 [2024-12-15 13:02:56.248855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021269 ] 00:22:48.527 [2024-12-15 13:02:56.318442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.527 [2024-12-15 13:02:56.340474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:48.527 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vIBqfv6WTu 00:22:48.786 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:49.045 [2024-12-15 13:02:56.819495] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.045 TLSTESTn1 00:22:49.045 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:49.305 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:49.305 "subsystems": [ 00:22:49.305 { 00:22:49.305 "subsystem": "keyring", 00:22:49.305 "config": [ 00:22:49.305 { 00:22:49.305 "method": "keyring_file_add_key", 00:22:49.305 "params": { 00:22:49.305 "name": "key0", 00:22:49.305 "path": "/tmp/tmp.vIBqfv6WTu" 00:22:49.305 } 00:22:49.305 } 00:22:49.305 ] 00:22:49.305 }, 00:22:49.305 { 00:22:49.305 "subsystem": "iobuf", 00:22:49.305 "config": [ 00:22:49.305 { 00:22:49.305 "method": "iobuf_set_options", 00:22:49.305 "params": { 00:22:49.305 "small_pool_count": 8192, 00:22:49.305 "large_pool_count": 1024, 00:22:49.305 "small_bufsize": 8192, 00:22:49.305 "large_bufsize": 135168, 00:22:49.305 "enable_numa": false 00:22:49.305 } 00:22:49.305 } 00:22:49.305 ] 00:22:49.305 }, 00:22:49.305 { 00:22:49.305 "subsystem": "sock", 00:22:49.305 "config": [ 00:22:49.305 { 00:22:49.305 "method": "sock_set_default_impl", 00:22:49.305 "params": { 00:22:49.305 "impl_name": "posix" 00:22:49.305 } 00:22:49.305 }, 00:22:49.305 { 00:22:49.305 "method": "sock_impl_set_options", 00:22:49.305 "params": { 00:22:49.305 "impl_name": "ssl", 00:22:49.305 "recv_buf_size": 4096, 00:22:49.305 "send_buf_size": 4096, 00:22:49.305 "enable_recv_pipe": true, 00:22:49.305 "enable_quickack": false, 00:22:49.305 "enable_placement_id": 0, 00:22:49.305 "enable_zerocopy_send_server": true, 00:22:49.305 "enable_zerocopy_send_client": false, 00:22:49.305 "zerocopy_threshold": 0, 00:22:49.305 "tls_version": 0, 00:22:49.306 "enable_ktls": false 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "sock_impl_set_options", 00:22:49.306 "params": { 00:22:49.306 "impl_name": "posix", 00:22:49.306 "recv_buf_size": 2097152, 00:22:49.306 "send_buf_size": 2097152, 00:22:49.306 "enable_recv_pipe": true, 00:22:49.306 "enable_quickack": false, 00:22:49.306 "enable_placement_id": 0, 
00:22:49.306 "enable_zerocopy_send_server": true, 00:22:49.306 "enable_zerocopy_send_client": false, 00:22:49.306 "zerocopy_threshold": 0, 00:22:49.306 "tls_version": 0, 00:22:49.306 "enable_ktls": false 00:22:49.306 } 00:22:49.306 } 00:22:49.306 ] 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "subsystem": "vmd", 00:22:49.306 "config": [] 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "subsystem": "accel", 00:22:49.306 "config": [ 00:22:49.306 { 00:22:49.306 "method": "accel_set_options", 00:22:49.306 "params": { 00:22:49.306 "small_cache_size": 128, 00:22:49.306 "large_cache_size": 16, 00:22:49.306 "task_count": 2048, 00:22:49.306 "sequence_count": 2048, 00:22:49.306 "buf_count": 2048 00:22:49.306 } 00:22:49.306 } 00:22:49.306 ] 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "subsystem": "bdev", 00:22:49.306 "config": [ 00:22:49.306 { 00:22:49.306 "method": "bdev_set_options", 00:22:49.306 "params": { 00:22:49.306 "bdev_io_pool_size": 65535, 00:22:49.306 "bdev_io_cache_size": 256, 00:22:49.306 "bdev_auto_examine": true, 00:22:49.306 "iobuf_small_cache_size": 128, 00:22:49.306 "iobuf_large_cache_size": 16 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "bdev_raid_set_options", 00:22:49.306 "params": { 00:22:49.306 "process_window_size_kb": 1024, 00:22:49.306 "process_max_bandwidth_mb_sec": 0 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "bdev_iscsi_set_options", 00:22:49.306 "params": { 00:22:49.306 "timeout_sec": 30 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "bdev_nvme_set_options", 00:22:49.306 "params": { 00:22:49.306 "action_on_timeout": "none", 00:22:49.306 "timeout_us": 0, 00:22:49.306 "timeout_admin_us": 0, 00:22:49.306 "keep_alive_timeout_ms": 10000, 00:22:49.306 "arbitration_burst": 0, 00:22:49.306 "low_priority_weight": 0, 00:22:49.306 "medium_priority_weight": 0, 00:22:49.306 "high_priority_weight": 0, 00:22:49.306 "nvme_adminq_poll_period_us": 10000, 00:22:49.306 "nvme_ioq_poll_period_us": 0, 
00:22:49.306 "io_queue_requests": 0, 00:22:49.306 "delay_cmd_submit": true, 00:22:49.306 "transport_retry_count": 4, 00:22:49.306 "bdev_retry_count": 3, 00:22:49.306 "transport_ack_timeout": 0, 00:22:49.306 "ctrlr_loss_timeout_sec": 0, 00:22:49.306 "reconnect_delay_sec": 0, 00:22:49.306 "fast_io_fail_timeout_sec": 0, 00:22:49.306 "disable_auto_failback": false, 00:22:49.306 "generate_uuids": false, 00:22:49.306 "transport_tos": 0, 00:22:49.306 "nvme_error_stat": false, 00:22:49.306 "rdma_srq_size": 0, 00:22:49.306 "io_path_stat": false, 00:22:49.306 "allow_accel_sequence": false, 00:22:49.306 "rdma_max_cq_size": 0, 00:22:49.306 "rdma_cm_event_timeout_ms": 0, 00:22:49.306 "dhchap_digests": [ 00:22:49.306 "sha256", 00:22:49.306 "sha384", 00:22:49.306 "sha512" 00:22:49.306 ], 00:22:49.306 "dhchap_dhgroups": [ 00:22:49.306 "null", 00:22:49.306 "ffdhe2048", 00:22:49.306 "ffdhe3072", 00:22:49.306 "ffdhe4096", 00:22:49.306 "ffdhe6144", 00:22:49.306 "ffdhe8192" 00:22:49.306 ], 00:22:49.306 "rdma_umr_per_io": false 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "bdev_nvme_set_hotplug", 00:22:49.306 "params": { 00:22:49.306 "period_us": 100000, 00:22:49.306 "enable": false 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "bdev_malloc_create", 00:22:49.306 "params": { 00:22:49.306 "name": "malloc0", 00:22:49.306 "num_blocks": 8192, 00:22:49.306 "block_size": 4096, 00:22:49.306 "physical_block_size": 4096, 00:22:49.306 "uuid": "d788be03-600d-444b-8153-17ef2282deb3", 00:22:49.306 "optimal_io_boundary": 0, 00:22:49.306 "md_size": 0, 00:22:49.306 "dif_type": 0, 00:22:49.306 "dif_is_head_of_md": false, 00:22:49.306 "dif_pi_format": 0 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "bdev_wait_for_examine" 00:22:49.306 } 00:22:49.306 ] 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "subsystem": "nbd", 00:22:49.306 "config": [] 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "subsystem": "scheduler", 00:22:49.306 "config": [ 
00:22:49.306 { 00:22:49.306 "method": "framework_set_scheduler", 00:22:49.306 "params": { 00:22:49.306 "name": "static" 00:22:49.306 } 00:22:49.306 } 00:22:49.306 ] 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "subsystem": "nvmf", 00:22:49.306 "config": [ 00:22:49.306 { 00:22:49.306 "method": "nvmf_set_config", 00:22:49.306 "params": { 00:22:49.306 "discovery_filter": "match_any", 00:22:49.306 "admin_cmd_passthru": { 00:22:49.306 "identify_ctrlr": false 00:22:49.306 }, 00:22:49.306 "dhchap_digests": [ 00:22:49.306 "sha256", 00:22:49.306 "sha384", 00:22:49.306 "sha512" 00:22:49.306 ], 00:22:49.306 "dhchap_dhgroups": [ 00:22:49.306 "null", 00:22:49.306 "ffdhe2048", 00:22:49.306 "ffdhe3072", 00:22:49.306 "ffdhe4096", 00:22:49.306 "ffdhe6144", 00:22:49.306 "ffdhe8192" 00:22:49.306 ] 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "nvmf_set_max_subsystems", 00:22:49.306 "params": { 00:22:49.306 "max_subsystems": 1024 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "nvmf_set_crdt", 00:22:49.306 "params": { 00:22:49.306 "crdt1": 0, 00:22:49.306 "crdt2": 0, 00:22:49.306 "crdt3": 0 00:22:49.306 } 00:22:49.306 }, 00:22:49.306 { 00:22:49.306 "method": "nvmf_create_transport", 00:22:49.306 "params": { 00:22:49.306 "trtype": "TCP", 00:22:49.306 "max_queue_depth": 128, 00:22:49.306 "max_io_qpairs_per_ctrlr": 127, 00:22:49.307 "in_capsule_data_size": 4096, 00:22:49.307 "max_io_size": 131072, 00:22:49.307 "io_unit_size": 131072, 00:22:49.307 "max_aq_depth": 128, 00:22:49.307 "num_shared_buffers": 511, 00:22:49.307 "buf_cache_size": 4294967295, 00:22:49.307 "dif_insert_or_strip": false, 00:22:49.307 "zcopy": false, 00:22:49.307 "c2h_success": false, 00:22:49.307 "sock_priority": 0, 00:22:49.307 "abort_timeout_sec": 1, 00:22:49.307 "ack_timeout": 0, 00:22:49.307 "data_wr_pool_size": 0 00:22:49.307 } 00:22:49.307 }, 00:22:49.307 { 00:22:49.307 "method": "nvmf_create_subsystem", 00:22:49.307 "params": { 00:22:49.307 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:49.307 "allow_any_host": false, 00:22:49.307 "serial_number": "SPDK00000000000001", 00:22:49.307 "model_number": "SPDK bdev Controller", 00:22:49.307 "max_namespaces": 10, 00:22:49.307 "min_cntlid": 1, 00:22:49.307 "max_cntlid": 65519, 00:22:49.307 "ana_reporting": false 00:22:49.307 } 00:22:49.307 }, 00:22:49.307 { 00:22:49.307 "method": "nvmf_subsystem_add_host", 00:22:49.307 "params": { 00:22:49.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.307 "host": "nqn.2016-06.io.spdk:host1", 00:22:49.307 "psk": "key0" 00:22:49.307 } 00:22:49.307 }, 00:22:49.307 { 00:22:49.307 "method": "nvmf_subsystem_add_ns", 00:22:49.307 "params": { 00:22:49.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.307 "namespace": { 00:22:49.307 "nsid": 1, 00:22:49.307 "bdev_name": "malloc0", 00:22:49.307 "nguid": "D788BE03600D444B815317EF2282DEB3", 00:22:49.307 "uuid": "d788be03-600d-444b-8153-17ef2282deb3", 00:22:49.307 "no_auto_visible": false 00:22:49.307 } 00:22:49.307 } 00:22:49.307 }, 00:22:49.307 { 00:22:49.307 "method": "nvmf_subsystem_add_listener", 00:22:49.307 "params": { 00:22:49.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.307 "listen_address": { 00:22:49.307 "trtype": "TCP", 00:22:49.307 "adrfam": "IPv4", 00:22:49.307 "traddr": "10.0.0.2", 00:22:49.307 "trsvcid": "4420" 00:22:49.307 }, 00:22:49.307 "secure_channel": true 00:22:49.307 } 00:22:49.307 } 00:22:49.307 ] 00:22:49.307 } 00:22:49.307 ] 00:22:49.307 }' 00:22:49.307 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:49.567 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:49.567 "subsystems": [ 00:22:49.567 { 00:22:49.567 "subsystem": "keyring", 00:22:49.567 "config": [ 00:22:49.567 { 00:22:49.567 "method": "keyring_file_add_key", 00:22:49.567 "params": { 00:22:49.567 "name": "key0", 00:22:49.567 "path": 
"/tmp/tmp.vIBqfv6WTu" 00:22:49.567 } 00:22:49.567 } 00:22:49.567 ] 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "subsystem": "iobuf", 00:22:49.567 "config": [ 00:22:49.567 { 00:22:49.567 "method": "iobuf_set_options", 00:22:49.567 "params": { 00:22:49.567 "small_pool_count": 8192, 00:22:49.567 "large_pool_count": 1024, 00:22:49.567 "small_bufsize": 8192, 00:22:49.567 "large_bufsize": 135168, 00:22:49.567 "enable_numa": false 00:22:49.567 } 00:22:49.567 } 00:22:49.567 ] 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "subsystem": "sock", 00:22:49.567 "config": [ 00:22:49.567 { 00:22:49.567 "method": "sock_set_default_impl", 00:22:49.567 "params": { 00:22:49.567 "impl_name": "posix" 00:22:49.567 } 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "method": "sock_impl_set_options", 00:22:49.567 "params": { 00:22:49.567 "impl_name": "ssl", 00:22:49.567 "recv_buf_size": 4096, 00:22:49.567 "send_buf_size": 4096, 00:22:49.567 "enable_recv_pipe": true, 00:22:49.567 "enable_quickack": false, 00:22:49.567 "enable_placement_id": 0, 00:22:49.567 "enable_zerocopy_send_server": true, 00:22:49.567 "enable_zerocopy_send_client": false, 00:22:49.567 "zerocopy_threshold": 0, 00:22:49.567 "tls_version": 0, 00:22:49.567 "enable_ktls": false 00:22:49.567 } 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "method": "sock_impl_set_options", 00:22:49.567 "params": { 00:22:49.567 "impl_name": "posix", 00:22:49.567 "recv_buf_size": 2097152, 00:22:49.567 "send_buf_size": 2097152, 00:22:49.567 "enable_recv_pipe": true, 00:22:49.567 "enable_quickack": false, 00:22:49.567 "enable_placement_id": 0, 00:22:49.567 "enable_zerocopy_send_server": true, 00:22:49.567 "enable_zerocopy_send_client": false, 00:22:49.567 "zerocopy_threshold": 0, 00:22:49.567 "tls_version": 0, 00:22:49.567 "enable_ktls": false 00:22:49.567 } 00:22:49.567 } 00:22:49.567 ] 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "subsystem": "vmd", 00:22:49.567 "config": [] 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "subsystem": "accel", 00:22:49.567 
"config": [ 00:22:49.567 { 00:22:49.567 "method": "accel_set_options", 00:22:49.567 "params": { 00:22:49.567 "small_cache_size": 128, 00:22:49.567 "large_cache_size": 16, 00:22:49.567 "task_count": 2048, 00:22:49.567 "sequence_count": 2048, 00:22:49.567 "buf_count": 2048 00:22:49.567 } 00:22:49.567 } 00:22:49.567 ] 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "subsystem": "bdev", 00:22:49.567 "config": [ 00:22:49.567 { 00:22:49.567 "method": "bdev_set_options", 00:22:49.567 "params": { 00:22:49.567 "bdev_io_pool_size": 65535, 00:22:49.567 "bdev_io_cache_size": 256, 00:22:49.567 "bdev_auto_examine": true, 00:22:49.567 "iobuf_small_cache_size": 128, 00:22:49.567 "iobuf_large_cache_size": 16 00:22:49.567 } 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "method": "bdev_raid_set_options", 00:22:49.567 "params": { 00:22:49.567 "process_window_size_kb": 1024, 00:22:49.567 "process_max_bandwidth_mb_sec": 0 00:22:49.567 } 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "method": "bdev_iscsi_set_options", 00:22:49.567 "params": { 00:22:49.567 "timeout_sec": 30 00:22:49.567 } 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "method": "bdev_nvme_set_options", 00:22:49.567 "params": { 00:22:49.567 "action_on_timeout": "none", 00:22:49.567 "timeout_us": 0, 00:22:49.567 "timeout_admin_us": 0, 00:22:49.567 "keep_alive_timeout_ms": 10000, 00:22:49.567 "arbitration_burst": 0, 00:22:49.567 "low_priority_weight": 0, 00:22:49.567 "medium_priority_weight": 0, 00:22:49.567 "high_priority_weight": 0, 00:22:49.567 "nvme_adminq_poll_period_us": 10000, 00:22:49.567 "nvme_ioq_poll_period_us": 0, 00:22:49.567 "io_queue_requests": 512, 00:22:49.567 "delay_cmd_submit": true, 00:22:49.567 "transport_retry_count": 4, 00:22:49.567 "bdev_retry_count": 3, 00:22:49.567 "transport_ack_timeout": 0, 00:22:49.567 "ctrlr_loss_timeout_sec": 0, 00:22:49.567 "reconnect_delay_sec": 0, 00:22:49.567 "fast_io_fail_timeout_sec": 0, 00:22:49.567 "disable_auto_failback": false, 00:22:49.567 "generate_uuids": false, 00:22:49.567 
"transport_tos": 0, 00:22:49.567 "nvme_error_stat": false, 00:22:49.567 "rdma_srq_size": 0, 00:22:49.567 "io_path_stat": false, 00:22:49.567 "allow_accel_sequence": false, 00:22:49.567 "rdma_max_cq_size": 0, 00:22:49.567 "rdma_cm_event_timeout_ms": 0, 00:22:49.567 "dhchap_digests": [ 00:22:49.567 "sha256", 00:22:49.567 "sha384", 00:22:49.567 "sha512" 00:22:49.567 ], 00:22:49.567 "dhchap_dhgroups": [ 00:22:49.567 "null", 00:22:49.567 "ffdhe2048", 00:22:49.567 "ffdhe3072", 00:22:49.567 "ffdhe4096", 00:22:49.567 "ffdhe6144", 00:22:49.567 "ffdhe8192" 00:22:49.567 ], 00:22:49.567 "rdma_umr_per_io": false 00:22:49.567 } 00:22:49.567 }, 00:22:49.567 { 00:22:49.567 "method": "bdev_nvme_attach_controller", 00:22:49.567 "params": { 00:22:49.567 "name": "TLSTEST", 00:22:49.567 "trtype": "TCP", 00:22:49.567 "adrfam": "IPv4", 00:22:49.567 "traddr": "10.0.0.2", 00:22:49.567 "trsvcid": "4420", 00:22:49.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.567 "prchk_reftag": false, 00:22:49.567 "prchk_guard": false, 00:22:49.567 "ctrlr_loss_timeout_sec": 0, 00:22:49.567 "reconnect_delay_sec": 0, 00:22:49.567 "fast_io_fail_timeout_sec": 0, 00:22:49.567 "psk": "key0", 00:22:49.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.567 "hdgst": false, 00:22:49.567 "ddgst": false, 00:22:49.567 "multipath": "multipath" 00:22:49.567 } 00:22:49.567 }, 00:22:49.568 { 00:22:49.568 "method": "bdev_nvme_set_hotplug", 00:22:49.568 "params": { 00:22:49.568 "period_us": 100000, 00:22:49.568 "enable": false 00:22:49.568 } 00:22:49.568 }, 00:22:49.568 { 00:22:49.568 "method": "bdev_wait_for_examine" 00:22:49.568 } 00:22:49.568 ] 00:22:49.568 }, 00:22:49.568 { 00:22:49.568 "subsystem": "nbd", 00:22:49.568 "config": [] 00:22:49.568 } 00:22:49.568 ] 00:22:49.568 }' 00:22:49.568 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1021269 00:22:49.568 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021269 ']' 00:22:49.568 13:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021269 00:22:49.568 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:49.568 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.568 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021269 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021269' 00:22:49.828 killing process with pid 1021269 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021269 00:22:49.828 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.828 00:22:49.828 Latency(us) 00:22:49.828 [2024-12-15T12:02:57.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.828 [2024-12-15T12:02:57.735Z] =================================================================================================================== 00:22:49.828 [2024-12-15T12:02:57.735Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021269 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1021021 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021021 ']' 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021021 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021021 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021021' 00:22:49.828 killing process with pid 1021021 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021021 00:22:49.828 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021021 00:22:50.088 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:50.088 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.088 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.088 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:50.088 "subsystems": [ 00:22:50.088 { 00:22:50.088 "subsystem": "keyring", 00:22:50.088 "config": [ 00:22:50.088 { 00:22:50.088 "method": "keyring_file_add_key", 00:22:50.088 "params": { 00:22:50.088 "name": "key0", 00:22:50.088 "path": "/tmp/tmp.vIBqfv6WTu" 00:22:50.088 } 00:22:50.088 } 00:22:50.088 ] 00:22:50.088 }, 00:22:50.088 { 00:22:50.088 "subsystem": "iobuf", 00:22:50.088 "config": [ 00:22:50.088 { 00:22:50.088 "method": "iobuf_set_options", 00:22:50.088 "params": { 00:22:50.088 "small_pool_count": 8192, 00:22:50.088 "large_pool_count": 1024, 00:22:50.088 "small_bufsize": 8192, 00:22:50.088 "large_bufsize": 135168, 00:22:50.088 "enable_numa": false 
00:22:50.088 } 00:22:50.088 } 00:22:50.088 ] 00:22:50.088 }, 00:22:50.088 { 00:22:50.088 "subsystem": "sock", 00:22:50.088 "config": [ 00:22:50.088 { 00:22:50.088 "method": "sock_set_default_impl", 00:22:50.088 "params": { 00:22:50.088 "impl_name": "posix" 00:22:50.088 } 00:22:50.088 }, 00:22:50.088 { 00:22:50.088 "method": "sock_impl_set_options", 00:22:50.088 "params": { 00:22:50.088 "impl_name": "ssl", 00:22:50.088 "recv_buf_size": 4096, 00:22:50.088 "send_buf_size": 4096, 00:22:50.088 "enable_recv_pipe": true, 00:22:50.088 "enable_quickack": false, 00:22:50.088 "enable_placement_id": 0, 00:22:50.088 "enable_zerocopy_send_server": true, 00:22:50.088 "enable_zerocopy_send_client": false, 00:22:50.088 "zerocopy_threshold": 0, 00:22:50.089 "tls_version": 0, 00:22:50.089 "enable_ktls": false 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "method": "sock_impl_set_options", 00:22:50.089 "params": { 00:22:50.089 "impl_name": "posix", 00:22:50.089 "recv_buf_size": 2097152, 00:22:50.089 "send_buf_size": 2097152, 00:22:50.089 "enable_recv_pipe": true, 00:22:50.089 "enable_quickack": false, 00:22:50.089 "enable_placement_id": 0, 00:22:50.089 "enable_zerocopy_send_server": true, 00:22:50.089 "enable_zerocopy_send_client": false, 00:22:50.089 "zerocopy_threshold": 0, 00:22:50.089 "tls_version": 0, 00:22:50.089 "enable_ktls": false 00:22:50.089 } 00:22:50.089 } 00:22:50.089 ] 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "subsystem": "vmd", 00:22:50.089 "config": [] 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "subsystem": "accel", 00:22:50.089 "config": [ 00:22:50.089 { 00:22:50.089 "method": "accel_set_options", 00:22:50.089 "params": { 00:22:50.089 "small_cache_size": 128, 00:22:50.089 "large_cache_size": 16, 00:22:50.089 "task_count": 2048, 00:22:50.089 "sequence_count": 2048, 00:22:50.089 "buf_count": 2048 00:22:50.089 } 00:22:50.089 } 00:22:50.089 ] 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "subsystem": "bdev", 00:22:50.089 "config": [ 00:22:50.089 { 
00:22:50.089 "method": "bdev_set_options", 00:22:50.089 "params": { 00:22:50.089 "bdev_io_pool_size": 65535, 00:22:50.089 "bdev_io_cache_size": 256, 00:22:50.089 "bdev_auto_examine": true, 00:22:50.089 "iobuf_small_cache_size": 128, 00:22:50.089 "iobuf_large_cache_size": 16 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "method": "bdev_raid_set_options", 00:22:50.089 "params": { 00:22:50.089 "process_window_size_kb": 1024, 00:22:50.089 "process_max_bandwidth_mb_sec": 0 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "method": "bdev_iscsi_set_options", 00:22:50.089 "params": { 00:22:50.089 "timeout_sec": 30 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "method": "bdev_nvme_set_options", 00:22:50.089 "params": { 00:22:50.089 "action_on_timeout": "none", 00:22:50.089 "timeout_us": 0, 00:22:50.089 "timeout_admin_us": 0, 00:22:50.089 "keep_alive_timeout_ms": 10000, 00:22:50.089 "arbitration_burst": 0, 00:22:50.089 "low_priority_weight": 0, 00:22:50.089 "medium_priority_weight": 0, 00:22:50.089 "high_priority_weight": 0, 00:22:50.089 "nvme_adminq_poll_period_us": 10000, 00:22:50.089 "nvme_ioq_poll_period_us": 0, 00:22:50.089 "io_queue_requests": 0, 00:22:50.089 "delay_cmd_submit": true, 00:22:50.089 "transport_retry_count": 4, 00:22:50.089 "bdev_retry_count": 3, 00:22:50.089 "transport_ack_timeout": 0, 00:22:50.089 "ctrlr_loss_timeout_sec": 0, 00:22:50.089 "reconnect_delay_sec": 0, 00:22:50.089 "fast_io_fail_timeout_sec": 0, 00:22:50.089 "disable_auto_failback": false, 00:22:50.089 "generate_uuids": false, 00:22:50.089 "transport_tos": 0, 00:22:50.089 "nvme_error_stat": false, 00:22:50.089 "rdma_srq_size": 0, 00:22:50.089 "io_path_stat": false, 00:22:50.089 "allow_accel_sequence": false, 00:22:50.089 "rdma_max_cq_size": 0, 00:22:50.089 "rdma_cm_event_timeout_ms": 0, 00:22:50.089 "dhchap_digests": [ 00:22:50.089 "sha256", 00:22:50.089 "sha384", 00:22:50.089 "sha512" 00:22:50.089 ], 00:22:50.089 "dhchap_dhgroups": [ 00:22:50.089 "null", 
00:22:50.089 "ffdhe2048", 00:22:50.089 "ffdhe3072", 00:22:50.089 "ffdhe4096", 00:22:50.089 "ffdhe6144", 00:22:50.089 "ffdhe8192" 00:22:50.089 ], 00:22:50.089 "rdma_umr_per_io": false 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "method": "bdev_nvme_set_hotplug", 00:22:50.089 "params": { 00:22:50.089 "period_us": 100000, 00:22:50.089 "enable": false 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "method": "bdev_malloc_create", 00:22:50.089 "params": { 00:22:50.089 "name": "malloc0", 00:22:50.089 "num_blocks": 8192, 00:22:50.089 "block_size": 4096, 00:22:50.089 "physical_block_size": 4096, 00:22:50.089 "uuid": "d788be03-600d-444b-8153-17ef2282deb3", 00:22:50.089 "optimal_io_boundary": 0, 00:22:50.089 "md_size": 0, 00:22:50.089 "dif_type": 0, 00:22:50.089 "dif_is_head_of_md": false, 00:22:50.089 "dif_pi_format": 0 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "method": "bdev_wait_for_examine" 00:22:50.089 } 00:22:50.089 ] 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "subsystem": "nbd", 00:22:50.089 "config": [] 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "subsystem": "scheduler", 00:22:50.089 "config": [ 00:22:50.089 { 00:22:50.089 "method": "framework_set_scheduler", 00:22:50.089 "params": { 00:22:50.089 "name": "static" 00:22:50.089 } 00:22:50.089 } 00:22:50.089 ] 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "subsystem": "nvmf", 00:22:50.089 "config": [ 00:22:50.089 { 00:22:50.089 "method": "nvmf_set_config", 00:22:50.089 "params": { 00:22:50.089 "discovery_filter": "match_any", 00:22:50.089 "admin_cmd_passthru": { 00:22:50.089 "identify_ctrlr": false 00:22:50.089 }, 00:22:50.089 "dhchap_digests": [ 00:22:50.089 "sha256", 00:22:50.089 "sha384", 00:22:50.089 "sha512" 00:22:50.089 ], 00:22:50.089 "dhchap_dhgroups": [ 00:22:50.089 "null", 00:22:50.089 "ffdhe2048", 00:22:50.089 "ffdhe3072", 00:22:50.089 "ffdhe4096", 00:22:50.089 "ffdhe6144", 00:22:50.089 "ffdhe8192" 00:22:50.089 ] 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 
00:22:50.089 "method": "nvmf_set_max_subsystems", 00:22:50.089 "params": { 00:22:50.089 "max_subsystems": 1024 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "method": "nvmf_set_crdt", 00:22:50.089 "params": { 00:22:50.089 "crdt1": 0, 00:22:50.089 "crdt2": 0, 00:22:50.089 "crdt3": 0 00:22:50.089 } 00:22:50.089 }, 00:22:50.089 { 00:22:50.089 "method": "nvmf_create_transport", 00:22:50.089 "params": { 00:22:50.089 "trtype": "TCP", 00:22:50.089 "max_queue_depth": 128, 00:22:50.089 "max_io_qpairs_per_ctrlr": 127, 00:22:50.089 "in_capsule_data_size": 4096, 00:22:50.090 "max_io_size": 131072, 00:22:50.090 "io_unit_size": 131072, 00:22:50.090 "max_aq_depth": 128, 00:22:50.090 "num_shared_buffers": 511, 00:22:50.090 "buf_cache_size": 4294967295, 00:22:50.090 "dif_insert_or_strip": false, 00:22:50.090 "zcopy": false, 00:22:50.090 "c2h_success": false, 00:22:50.090 "sock_priority": 0, 00:22:50.090 "abort_timeout_sec": 1, 00:22:50.090 "ack_timeout": 0, 00:22:50.090 "data_wr_pool_size": 0 00:22:50.090 } 00:22:50.090 }, 00:22:50.090 { 00:22:50.090 "method": "nvmf_create_subsystem", 00:22:50.090 "params": { 00:22:50.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.090 "allow_any_host": false, 00:22:50.090 "serial_number": "SPDK00000000000001", 00:22:50.090 "model_number": "SPDK bdev Controller", 00:22:50.090 "max_namespaces": 10, 00:22:50.090 "min_cntlid": 1, 00:22:50.090 "max_cntlid": 65519, 00:22:50.090 "ana_reporting": false 00:22:50.090 } 00:22:50.090 }, 00:22:50.090 { 00:22:50.090 "method": "nvmf_subsystem_add_host", 00:22:50.090 "params": { 00:22:50.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.090 "host": "nqn.2016-06.io.spdk:host1", 00:22:50.090 "psk": "key0" 00:22:50.090 } 00:22:50.090 }, 00:22:50.090 { 00:22:50.090 "method": "nvmf_subsystem_add_ns", 00:22:50.090 "params": { 00:22:50.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.090 "namespace": { 00:22:50.090 "nsid": 1, 00:22:50.090 "bdev_name": "malloc0", 00:22:50.090 "nguid": 
"D788BE03600D444B815317EF2282DEB3", 00:22:50.090 "uuid": "d788be03-600d-444b-8153-17ef2282deb3", 00:22:50.090 "no_auto_visible": false 00:22:50.090 } 00:22:50.090 } 00:22:50.090 }, 00:22:50.090 { 00:22:50.090 "method": "nvmf_subsystem_add_listener", 00:22:50.090 "params": { 00:22:50.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.090 "listen_address": { 00:22:50.090 "trtype": "TCP", 00:22:50.090 "adrfam": "IPv4", 00:22:50.090 "traddr": "10.0.0.2", 00:22:50.090 "trsvcid": "4420" 00:22:50.090 }, 00:22:50.090 "secure_channel": true 00:22:50.090 } 00:22:50.090 } 00:22:50.090 ] 00:22:50.090 } 00:22:50.090 ] 00:22:50.090 }' 00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1021628 00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1021628 00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021628 ']' 00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.090 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.090 [2024-12-15 13:02:57.910758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:50.090 [2024-12-15 13:02:57.910813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.090 [2024-12-15 13:02:57.988838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.350 [2024-12-15 13:02:58.008752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.350 [2024-12-15 13:02:58.008785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.350 [2024-12-15 13:02:58.008793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.350 [2024-12-15 13:02:58.008799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.350 [2024-12-15 13:02:58.008803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:50.350 [2024-12-15 13:02:58.009353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.350 [2024-12-15 13:02:58.215568] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.350 [2024-12-15 13:02:58.247602] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.350 [2024-12-15 13:02:58.247783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1021753 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1021753 /var/tmp/bdevperf.sock 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021753 ']' 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.919 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:50.919 "subsystems": [ 00:22:50.919 { 00:22:50.919 "subsystem": "keyring", 00:22:50.919 "config": [ 00:22:50.919 { 00:22:50.919 "method": "keyring_file_add_key", 00:22:50.919 "params": { 00:22:50.919 "name": "key0", 00:22:50.919 "path": "/tmp/tmp.vIBqfv6WTu" 00:22:50.919 } 00:22:50.919 } 00:22:50.919 ] 00:22:50.919 }, 00:22:50.919 { 00:22:50.919 "subsystem": "iobuf", 00:22:50.919 "config": [ 00:22:50.919 { 00:22:50.919 "method": "iobuf_set_options", 00:22:50.919 "params": { 00:22:50.919 "small_pool_count": 8192, 00:22:50.919 "large_pool_count": 1024, 00:22:50.919 "small_bufsize": 8192, 00:22:50.919 "large_bufsize": 135168, 00:22:50.919 "enable_numa": false 00:22:50.919 } 00:22:50.919 } 00:22:50.919 ] 00:22:50.919 }, 00:22:50.919 { 00:22:50.920 "subsystem": "sock", 00:22:50.920 "config": [ 00:22:50.920 { 00:22:50.920 "method": "sock_set_default_impl", 00:22:50.920 "params": { 00:22:50.920 "impl_name": "posix" 00:22:50.920 } 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "method": "sock_impl_set_options", 00:22:50.920 "params": { 00:22:50.920 "impl_name": "ssl", 00:22:50.920 "recv_buf_size": 4096, 00:22:50.920 "send_buf_size": 4096, 00:22:50.920 "enable_recv_pipe": true, 00:22:50.920 "enable_quickack": false, 00:22:50.920 "enable_placement_id": 0, 00:22:50.920 "enable_zerocopy_send_server": true, 00:22:50.920 "enable_zerocopy_send_client": false, 00:22:50.920 "zerocopy_threshold": 0, 00:22:50.920 "tls_version": 0, 00:22:50.920 "enable_ktls": false 00:22:50.920 } 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "method": "sock_impl_set_options", 00:22:50.920 "params": { 
00:22:50.920 "impl_name": "posix", 00:22:50.920 "recv_buf_size": 2097152, 00:22:50.920 "send_buf_size": 2097152, 00:22:50.920 "enable_recv_pipe": true, 00:22:50.920 "enable_quickack": false, 00:22:50.920 "enable_placement_id": 0, 00:22:50.920 "enable_zerocopy_send_server": true, 00:22:50.920 "enable_zerocopy_send_client": false, 00:22:50.920 "zerocopy_threshold": 0, 00:22:50.920 "tls_version": 0, 00:22:50.920 "enable_ktls": false 00:22:50.920 } 00:22:50.920 } 00:22:50.920 ] 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "subsystem": "vmd", 00:22:50.920 "config": [] 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "subsystem": "accel", 00:22:50.920 "config": [ 00:22:50.920 { 00:22:50.920 "method": "accel_set_options", 00:22:50.920 "params": { 00:22:50.920 "small_cache_size": 128, 00:22:50.920 "large_cache_size": 16, 00:22:50.920 "task_count": 2048, 00:22:50.920 "sequence_count": 2048, 00:22:50.920 "buf_count": 2048 00:22:50.920 } 00:22:50.920 } 00:22:50.920 ] 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "subsystem": "bdev", 00:22:50.920 "config": [ 00:22:50.920 { 00:22:50.920 "method": "bdev_set_options", 00:22:50.920 "params": { 00:22:50.920 "bdev_io_pool_size": 65535, 00:22:50.920 "bdev_io_cache_size": 256, 00:22:50.920 "bdev_auto_examine": true, 00:22:50.920 "iobuf_small_cache_size": 128, 00:22:50.920 "iobuf_large_cache_size": 16 00:22:50.920 } 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "method": "bdev_raid_set_options", 00:22:50.920 "params": { 00:22:50.920 "process_window_size_kb": 1024, 00:22:50.920 "process_max_bandwidth_mb_sec": 0 00:22:50.920 } 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "method": "bdev_iscsi_set_options", 00:22:50.920 "params": { 00:22:50.920 "timeout_sec": 30 00:22:50.920 } 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "method": "bdev_nvme_set_options", 00:22:50.920 "params": { 00:22:50.920 "action_on_timeout": "none", 00:22:50.920 "timeout_us": 0, 00:22:50.920 "timeout_admin_us": 0, 00:22:50.920 "keep_alive_timeout_ms": 10000, 00:22:50.920 
"arbitration_burst": 0, 00:22:50.920 "low_priority_weight": 0, 00:22:50.920 "medium_priority_weight": 0, 00:22:50.920 "high_priority_weight": 0, 00:22:50.920 "nvme_adminq_poll_period_us": 10000, 00:22:50.920 "nvme_ioq_poll_period_us": 0, 00:22:50.920 "io_queue_requests": 512, 00:22:50.920 "delay_cmd_submit": true, 00:22:50.920 "transport_retry_count": 4, 00:22:50.920 "bdev_retry_count": 3, 00:22:50.920 "transport_ack_timeout": 0, 00:22:50.920 "ctrlr_loss_timeout_sec": 0, 00:22:50.920 "reconnect_delay_sec": 0, 00:22:50.920 "fast_io_fail_timeout_sec": 0, 00:22:50.920 "disable_auto_failback": false, 00:22:50.920 "generate_uuids": false, 00:22:50.920 "transport_tos": 0, 00:22:50.920 "nvme_error_stat": false, 00:22:50.920 "rdma_srq_size": 0, 00:22:50.920 "io_path_stat": false, 00:22:50.920 "allow_accel_sequence": false, 00:22:50.920 "rdma_max_cq_size": 0, 00:22:50.920 "rdma_cm_event_timeout_ms": 0, 00:22:50.920 "dhchap_digests": [ 00:22:50.920 "sha256", 00:22:50.920 "sha384", 00:22:50.920 "sha512" 00:22:50.920 ], 00:22:50.920 "dhchap_dhgroups": [ 00:22:50.920 "null", 00:22:50.920 "ffdhe2048", 00:22:50.920 "ffdhe3072", 00:22:50.920 "ffdhe4096", 00:22:50.920 "ffdhe6144", 00:22:50.920 "ffdhe8192" 00:22:50.920 ], 00:22:50.920 "rdma_umr_per_io": false 00:22:50.920 } 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "method": "bdev_nvme_attach_controller", 00:22:50.920 "params": { 00:22:50.920 "name": "TLSTEST", 00:22:50.920 "trtype": "TCP", 00:22:50.920 "adrfam": "IPv4", 00:22:50.920 "traddr": "10.0.0.2", 00:22:50.920 "trsvcid": "4420", 00:22:50.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.920 "prchk_reftag": false, 00:22:50.920 "prchk_guard": false, 00:22:50.920 "ctrlr_loss_timeout_sec": 0, 00:22:50.920 "reconnect_delay_sec": 0, 00:22:50.920 "fast_io_fail_timeout_sec": 0, 00:22:50.920 "psk": "key0", 00:22:50.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.920 "hdgst": false, 00:22:50.920 "ddgst": false, 00:22:50.920 "multipath": "multipath" 00:22:50.920 } 
00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "method": "bdev_nvme_set_hotplug", 00:22:50.920 "params": { 00:22:50.920 "period_us": 100000, 00:22:50.920 "enable": false 00:22:50.920 } 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "method": "bdev_wait_for_examine" 00:22:50.920 } 00:22:50.920 ] 00:22:50.920 }, 00:22:50.920 { 00:22:50.920 "subsystem": "nbd", 00:22:50.920 "config": [] 00:22:50.920 } 00:22:50.920 ] 00:22:50.920 }' 00:22:50.920 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.920 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.180 [2024-12-15 13:02:58.838895] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:51.180 [2024-12-15 13:02:58.838943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021753 ] 00:22:51.180 [2024-12-15 13:02:58.912469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.180 [2024-12-15 13:02:58.934491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.180 [2024-12-15 13:02:59.083137] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.118 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.118 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:52.118 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:52.118 Running I/O for 10 seconds... 
00:22:54.003 5184.00 IOPS, 20.25 MiB/s [2024-12-15T12:03:02.846Z] 5408.50 IOPS, 21.13 MiB/s [2024-12-15T12:03:03.785Z] 5510.33 IOPS, 21.52 MiB/s [2024-12-15T12:03:05.164Z] 5531.50 IOPS, 21.61 MiB/s [2024-12-15T12:03:06.101Z] 5553.20 IOPS, 21.69 MiB/s [2024-12-15T12:03:07.038Z] 5578.50 IOPS, 21.79 MiB/s [2024-12-15T12:03:07.974Z] 5591.71 IOPS, 21.84 MiB/s [2024-12-15T12:03:08.942Z] 5606.75 IOPS, 21.90 MiB/s [2024-12-15T12:03:09.925Z] 5612.33 IOPS, 21.92 MiB/s [2024-12-15T12:03:09.925Z] 5618.90 IOPS, 21.95 MiB/s 00:23:02.018 Latency(us) 00:23:02.018 [2024-12-15T12:03:09.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.018 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:02.018 Verification LBA range: start 0x0 length 0x2000 00:23:02.018 TLSTESTn1 : 10.02 5622.21 21.96 0.00 0.00 22731.79 4462.69 39696.09 00:23:02.018 [2024-12-15T12:03:09.925Z] =================================================================================================================== 00:23:02.018 [2024-12-15T12:03:09.925Z] Total : 5622.21 21.96 0.00 0.00 22731.79 4462.69 39696.09 00:23:02.018 { 00:23:02.018 "results": [ 00:23:02.018 { 00:23:02.018 "job": "TLSTESTn1", 00:23:02.018 "core_mask": "0x4", 00:23:02.018 "workload": "verify", 00:23:02.018 "status": "finished", 00:23:02.018 "verify_range": { 00:23:02.018 "start": 0, 00:23:02.018 "length": 8192 00:23:02.018 }, 00:23:02.018 "queue_depth": 128, 00:23:02.018 "io_size": 4096, 00:23:02.018 "runtime": 10.016702, 00:23:02.018 "iops": 5622.209785216731, 00:23:02.018 "mibps": 21.961756973502855, 00:23:02.018 "io_failed": 0, 00:23:02.018 "io_timeout": 0, 00:23:02.018 "avg_latency_us": 22731.790473755238, 00:23:02.018 "min_latency_us": 4462.689523809524, 00:23:02.018 "max_latency_us": 39696.09142857143 00:23:02.018 } 00:23:02.018 ], 00:23:02.018 "core_count": 1 00:23:02.018 } 00:23:02.018 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:02.018 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1021753 00:23:02.018 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021753 ']' 00:23:02.018 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021753 00:23:02.018 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:02.018 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.018 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021753 00:23:02.019 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:02.019 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:02.019 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021753' 00:23:02.019 killing process with pid 1021753 00:23:02.019 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021753 00:23:02.019 Received shutdown signal, test time was about 10.000000 seconds 00:23:02.019 00:23:02.019 Latency(us) 00:23:02.019 [2024-12-15T12:03:09.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.019 [2024-12-15T12:03:09.926Z] =================================================================================================================== 00:23:02.019 [2024-12-15T12:03:09.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.019 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021753 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1021628 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1021628 ']' 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021628 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021628 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021628' 00:23:02.278 killing process with pid 1021628 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021628 00:23:02.278 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021628 00:23:02.537 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1024071 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1024071 00:23:02.538 
13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024071 ']' 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.538 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.538 [2024-12-15 13:03:10.317328] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:02.538 [2024-12-15 13:03:10.317377] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.538 [2024-12-15 13:03:10.395854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.538 [2024-12-15 13:03:10.415271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.538 [2024-12-15 13:03:10.415310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.538 [2024-12-15 13:03:10.415318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.538 [2024-12-15 13:03:10.415324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:02.538 [2024-12-15 13:03:10.415329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.538 [2024-12-15 13:03:10.415848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.797 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.797 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:02.797 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.797 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.797 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.797 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.797 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.vIBqfv6WTu 00:23:02.797 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vIBqfv6WTu 00:23:02.797 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.056 [2024-12-15 13:03:10.735741] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.056 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:03.431 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:03.431 [2024-12-15 13:03:11.128735] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:03.431 [2024-12-15 13:03:11.128940] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.431 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:03.690 malloc0 00:23:03.690 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:03.690 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vIBqfv6WTu 00:23:03.949 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.208 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:04.208 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1024405 00:23:04.209 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.209 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1024405 /var/tmp/bdevperf.sock 00:23:04.209 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024405 ']' 00:23:04.209 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.209 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.209 
13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.209 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.209 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.209 [2024-12-15 13:03:11.970935] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:04.209 [2024-12-15 13:03:11.970986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024405 ] 00:23:04.209 [2024-12-15 13:03:12.047570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.209 [2024-12-15 13:03:12.069322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.468 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.468 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.468 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vIBqfv6WTu 00:23:04.468 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:04.727 [2024-12-15 13:03:12.524021] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:04.727 nvme0n1 00:23:04.727 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.986 Running I/O for 1 seconds... 00:23:05.922 5383.00 IOPS, 21.03 MiB/s 00:23:05.922 Latency(us) 00:23:05.922 [2024-12-15T12:03:13.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.922 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:05.922 Verification LBA range: start 0x0 length 0x2000 00:23:05.922 nvme0n1 : 1.01 5443.28 21.26 0.00 0.00 23361.82 5055.63 31706.94 00:23:05.922 [2024-12-15T12:03:13.829Z] =================================================================================================================== 00:23:05.922 [2024-12-15T12:03:13.829Z] Total : 5443.28 21.26 0.00 0.00 23361.82 5055.63 31706.94 00:23:05.922 { 00:23:05.922 "results": [ 00:23:05.922 { 00:23:05.922 "job": "nvme0n1", 00:23:05.922 "core_mask": "0x2", 00:23:05.922 "workload": "verify", 00:23:05.922 "status": "finished", 00:23:05.922 "verify_range": { 00:23:05.922 "start": 0, 00:23:05.922 "length": 8192 00:23:05.922 }, 00:23:05.922 "queue_depth": 128, 00:23:05.922 "io_size": 4096, 00:23:05.922 "runtime": 1.012624, 00:23:05.922 "iops": 5443.283982998625, 00:23:05.922 "mibps": 21.26282805858838, 00:23:05.922 "io_failed": 0, 00:23:05.922 "io_timeout": 0, 00:23:05.922 "avg_latency_us": 23361.815832469416, 00:23:05.922 "min_latency_us": 5055.634285714285, 00:23:05.922 "max_latency_us": 31706.94095238095 00:23:05.922 } 00:23:05.922 ], 00:23:05.922 "core_count": 1 00:23:05.922 } 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1024405 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024405 ']' 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1024405 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024405 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024405' 00:23:05.922 killing process with pid 1024405 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024405 00:23:05.922 Received shutdown signal, test time was about 1.000000 seconds 00:23:05.922 00:23:05.922 Latency(us) 00:23:05.922 [2024-12-15T12:03:13.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.922 [2024-12-15T12:03:13.829Z] =================================================================================================================== 00:23:05.922 [2024-12-15T12:03:13.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.922 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024405 00:23:06.181 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1024071 00:23:06.181 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024071 ']' 00:23:06.182 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024071 00:23:06.182 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:06.182 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.182 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024071 00:23:06.182 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.182 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.182 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024071' 00:23:06.182 killing process with pid 1024071 00:23:06.182 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024071 00:23:06.182 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024071 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1024778 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1024778 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024778 ']' 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.441 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.441 [2024-12-15 13:03:14.224831] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:06.441 [2024-12-15 13:03:14.224901] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.441 [2024-12-15 13:03:14.301434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.441 [2024-12-15 13:03:14.318655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.441 [2024-12-15 13:03:14.318691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.441 [2024-12-15 13:03:14.318698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.441 [2024-12-15 13:03:14.318704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.441 [2024-12-15 13:03:14.318709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
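The harness above blocks on "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." until the target's RPC socket accepts a connection. A minimal sketch of that wait-for-listen pattern (the socket path, retry budget, and delay here are illustrative stand-ins, not the harness's actual values):

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_unix_socket(path, max_retries=100, delay=0.1):
    """Poll a UNIX domain socket until connect() succeeds or retries run out."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True  # something is listening on the socket
        except OSError:
            time.sleep(delay)  # not up yet; back off and retry
        finally:
            s.close()
    return False

if __name__ == "__main__":
    # Stand-in for an RPC server that comes up after a short startup delay.
    path = os.path.join(tempfile.mkdtemp(), "spdk.sock")

    def serve():
        time.sleep(0.3)  # simulated application startup time
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(path)
        srv.listen(1)
        srv.accept()

    threading.Thread(target=serve, daemon=True).start()
    print(wait_for_unix_socket(path))
```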
00:23:06.441 [2024-12-15 13:03:14.319248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.700 [2024-12-15 13:03:14.457737] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.700 malloc0 00:23:06.700 [2024-12-15 13:03:14.485795] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.700 [2024-12-15 13:03:14.486001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1024797 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1024797 /var/tmp/bdevperf.sock 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024797 ']' 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.700 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.700 [2024-12-15 13:03:14.560915] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:06.700 [2024-12-15 13:03:14.560955] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024797 ] 00:23:06.960 [2024-12-15 13:03:14.635566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.960 [2024-12-15 13:03:14.657296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.960 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.960 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.960 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vIBqfv6WTu 00:23:07.219 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:07.219 [2024-12-15 13:03:15.112301] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.478 nvme0n1 00:23:07.478 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:07.478 Running I/O for 1 seconds... 
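bdevperf reports throughput two ways; in the result JSON from the 13:03:13 run above, "mibps" is simply "iops" scaled by the 4096-byte IO size. A quick recomputation from the logged figures (the helper name is mine, not a bdevperf function):

```python
def iops_to_mibps(iops, io_size=4096):
    """MiB/s for a fixed IO size in bytes: iops * io_size / 2**20."""
    return iops * io_size / (1024 * 1024)

# Values copied from the bdevperf result JSON logged at 13:03:13.
iops = 5443.283982998625
reported_mibps = 21.26282805858838

assert abs(iops_to_mibps(iops) - reported_mibps) < 1e-9
print(f"{iops_to_mibps(iops):.2f} MiB/s")  # matches the logged 21.26 MiB/s
```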
00:23:08.415 5517.00 IOPS, 21.55 MiB/s 00:23:08.415 Latency(us) 00:23:08.415 [2024-12-15T12:03:16.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.415 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:08.415 Verification LBA range: start 0x0 length 0x2000 00:23:08.415 nvme0n1 : 1.01 5578.63 21.79 0.00 0.00 22800.64 4805.97 20597.03 00:23:08.415 [2024-12-15T12:03:16.322Z] =================================================================================================================== 00:23:08.415 [2024-12-15T12:03:16.322Z] Total : 5578.63 21.79 0.00 0.00 22800.64 4805.97 20597.03 00:23:08.415 { 00:23:08.415 "results": [ 00:23:08.415 { 00:23:08.415 "job": "nvme0n1", 00:23:08.415 "core_mask": "0x2", 00:23:08.415 "workload": "verify", 00:23:08.415 "status": "finished", 00:23:08.415 "verify_range": { 00:23:08.415 "start": 0, 00:23:08.415 "length": 8192 00:23:08.415 }, 00:23:08.415 "queue_depth": 128, 00:23:08.415 "io_size": 4096, 00:23:08.415 "runtime": 1.012077, 00:23:08.415 "iops": 5578.626922655095, 00:23:08.415 "mibps": 21.791511416621464, 00:23:08.415 "io_failed": 0, 00:23:08.415 "io_timeout": 0, 00:23:08.415 "avg_latency_us": 22800.64243982255, 00:23:08.415 "min_latency_us": 4805.973333333333, 00:23:08.415 "max_latency_us": 20597.02857142857 00:23:08.415 } 00:23:08.415 ], 00:23:08.415 "core_count": 1 00:23:08.415 } 00:23:08.674 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:08.674 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.674 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.674 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.674 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:08.674 "subsystems": [ 00:23:08.674 { 00:23:08.674 "subsystem": 
"keyring", 00:23:08.674 "config": [ 00:23:08.674 { 00:23:08.674 "method": "keyring_file_add_key", 00:23:08.674 "params": { 00:23:08.674 "name": "key0", 00:23:08.674 "path": "/tmp/tmp.vIBqfv6WTu" 00:23:08.674 } 00:23:08.674 } 00:23:08.674 ] 00:23:08.674 }, 00:23:08.674 { 00:23:08.674 "subsystem": "iobuf", 00:23:08.674 "config": [ 00:23:08.674 { 00:23:08.674 "method": "iobuf_set_options", 00:23:08.674 "params": { 00:23:08.674 "small_pool_count": 8192, 00:23:08.674 "large_pool_count": 1024, 00:23:08.674 "small_bufsize": 8192, 00:23:08.674 "large_bufsize": 135168, 00:23:08.674 "enable_numa": false 00:23:08.674 } 00:23:08.674 } 00:23:08.674 ] 00:23:08.674 }, 00:23:08.674 { 00:23:08.674 "subsystem": "sock", 00:23:08.674 "config": [ 00:23:08.674 { 00:23:08.674 "method": "sock_set_default_impl", 00:23:08.674 "params": { 00:23:08.674 "impl_name": "posix" 00:23:08.674 } 00:23:08.674 }, 00:23:08.674 { 00:23:08.674 "method": "sock_impl_set_options", 00:23:08.674 "params": { 00:23:08.674 "impl_name": "ssl", 00:23:08.674 "recv_buf_size": 4096, 00:23:08.674 "send_buf_size": 4096, 00:23:08.674 "enable_recv_pipe": true, 00:23:08.674 "enable_quickack": false, 00:23:08.674 "enable_placement_id": 0, 00:23:08.674 "enable_zerocopy_send_server": true, 00:23:08.674 "enable_zerocopy_send_client": false, 00:23:08.674 "zerocopy_threshold": 0, 00:23:08.674 "tls_version": 0, 00:23:08.674 "enable_ktls": false 00:23:08.674 } 00:23:08.674 }, 00:23:08.674 { 00:23:08.674 "method": "sock_impl_set_options", 00:23:08.674 "params": { 00:23:08.674 "impl_name": "posix", 00:23:08.674 "recv_buf_size": 2097152, 00:23:08.674 "send_buf_size": 2097152, 00:23:08.674 "enable_recv_pipe": true, 00:23:08.674 "enable_quickack": false, 00:23:08.674 "enable_placement_id": 0, 00:23:08.674 "enable_zerocopy_send_server": true, 00:23:08.674 "enable_zerocopy_send_client": false, 00:23:08.674 "zerocopy_threshold": 0, 00:23:08.674 "tls_version": 0, 00:23:08.674 "enable_ktls": false 00:23:08.674 } 00:23:08.674 } 00:23:08.674 
] 00:23:08.674 }, 00:23:08.674 { 00:23:08.674 "subsystem": "vmd", 00:23:08.674 "config": [] 00:23:08.674 }, 00:23:08.674 { 00:23:08.674 "subsystem": "accel", 00:23:08.675 "config": [ 00:23:08.675 { 00:23:08.675 "method": "accel_set_options", 00:23:08.675 "params": { 00:23:08.675 "small_cache_size": 128, 00:23:08.675 "large_cache_size": 16, 00:23:08.675 "task_count": 2048, 00:23:08.675 "sequence_count": 2048, 00:23:08.675 "buf_count": 2048 00:23:08.675 } 00:23:08.675 } 00:23:08.675 ] 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "subsystem": "bdev", 00:23:08.675 "config": [ 00:23:08.675 { 00:23:08.675 "method": "bdev_set_options", 00:23:08.675 "params": { 00:23:08.675 "bdev_io_pool_size": 65535, 00:23:08.675 "bdev_io_cache_size": 256, 00:23:08.675 "bdev_auto_examine": true, 00:23:08.675 "iobuf_small_cache_size": 128, 00:23:08.675 "iobuf_large_cache_size": 16 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "bdev_raid_set_options", 00:23:08.675 "params": { 00:23:08.675 "process_window_size_kb": 1024, 00:23:08.675 "process_max_bandwidth_mb_sec": 0 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "bdev_iscsi_set_options", 00:23:08.675 "params": { 00:23:08.675 "timeout_sec": 30 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "bdev_nvme_set_options", 00:23:08.675 "params": { 00:23:08.675 "action_on_timeout": "none", 00:23:08.675 "timeout_us": 0, 00:23:08.675 "timeout_admin_us": 0, 00:23:08.675 "keep_alive_timeout_ms": 10000, 00:23:08.675 "arbitration_burst": 0, 00:23:08.675 "low_priority_weight": 0, 00:23:08.675 "medium_priority_weight": 0, 00:23:08.675 "high_priority_weight": 0, 00:23:08.675 "nvme_adminq_poll_period_us": 10000, 00:23:08.675 "nvme_ioq_poll_period_us": 0, 00:23:08.675 "io_queue_requests": 0, 00:23:08.675 "delay_cmd_submit": true, 00:23:08.675 "transport_retry_count": 4, 00:23:08.675 "bdev_retry_count": 3, 00:23:08.675 "transport_ack_timeout": 0, 00:23:08.675 "ctrlr_loss_timeout_sec": 0, 
00:23:08.675 "reconnect_delay_sec": 0, 00:23:08.675 "fast_io_fail_timeout_sec": 0, 00:23:08.675 "disable_auto_failback": false, 00:23:08.675 "generate_uuids": false, 00:23:08.675 "transport_tos": 0, 00:23:08.675 "nvme_error_stat": false, 00:23:08.675 "rdma_srq_size": 0, 00:23:08.675 "io_path_stat": false, 00:23:08.675 "allow_accel_sequence": false, 00:23:08.675 "rdma_max_cq_size": 0, 00:23:08.675 "rdma_cm_event_timeout_ms": 0, 00:23:08.675 "dhchap_digests": [ 00:23:08.675 "sha256", 00:23:08.675 "sha384", 00:23:08.675 "sha512" 00:23:08.675 ], 00:23:08.675 "dhchap_dhgroups": [ 00:23:08.675 "null", 00:23:08.675 "ffdhe2048", 00:23:08.675 "ffdhe3072", 00:23:08.675 "ffdhe4096", 00:23:08.675 "ffdhe6144", 00:23:08.675 "ffdhe8192" 00:23:08.675 ], 00:23:08.675 "rdma_umr_per_io": false 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "bdev_nvme_set_hotplug", 00:23:08.675 "params": { 00:23:08.675 "period_us": 100000, 00:23:08.675 "enable": false 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "bdev_malloc_create", 00:23:08.675 "params": { 00:23:08.675 "name": "malloc0", 00:23:08.675 "num_blocks": 8192, 00:23:08.675 "block_size": 4096, 00:23:08.675 "physical_block_size": 4096, 00:23:08.675 "uuid": "6ca7ee8a-ea03-4cce-8b5e-7b5ed813210d", 00:23:08.675 "optimal_io_boundary": 0, 00:23:08.675 "md_size": 0, 00:23:08.675 "dif_type": 0, 00:23:08.675 "dif_is_head_of_md": false, 00:23:08.675 "dif_pi_format": 0 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "bdev_wait_for_examine" 00:23:08.675 } 00:23:08.675 ] 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "subsystem": "nbd", 00:23:08.675 "config": [] 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "subsystem": "scheduler", 00:23:08.675 "config": [ 00:23:08.675 { 00:23:08.675 "method": "framework_set_scheduler", 00:23:08.675 "params": { 00:23:08.675 "name": "static" 00:23:08.675 } 00:23:08.675 } 00:23:08.675 ] 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "subsystem": "nvmf", 
00:23:08.675 "config": [ 00:23:08.675 { 00:23:08.675 "method": "nvmf_set_config", 00:23:08.675 "params": { 00:23:08.675 "discovery_filter": "match_any", 00:23:08.675 "admin_cmd_passthru": { 00:23:08.675 "identify_ctrlr": false 00:23:08.675 }, 00:23:08.675 "dhchap_digests": [ 00:23:08.675 "sha256", 00:23:08.675 "sha384", 00:23:08.675 "sha512" 00:23:08.675 ], 00:23:08.675 "dhchap_dhgroups": [ 00:23:08.675 "null", 00:23:08.675 "ffdhe2048", 00:23:08.675 "ffdhe3072", 00:23:08.675 "ffdhe4096", 00:23:08.675 "ffdhe6144", 00:23:08.675 "ffdhe8192" 00:23:08.675 ] 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "nvmf_set_max_subsystems", 00:23:08.675 "params": { 00:23:08.675 "max_subsystems": 1024 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "nvmf_set_crdt", 00:23:08.675 "params": { 00:23:08.675 "crdt1": 0, 00:23:08.675 "crdt2": 0, 00:23:08.675 "crdt3": 0 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "nvmf_create_transport", 00:23:08.675 "params": { 00:23:08.675 "trtype": "TCP", 00:23:08.675 "max_queue_depth": 128, 00:23:08.675 "max_io_qpairs_per_ctrlr": 127, 00:23:08.675 "in_capsule_data_size": 4096, 00:23:08.675 "max_io_size": 131072, 00:23:08.675 "io_unit_size": 131072, 00:23:08.675 "max_aq_depth": 128, 00:23:08.675 "num_shared_buffers": 511, 00:23:08.675 "buf_cache_size": 4294967295, 00:23:08.675 "dif_insert_or_strip": false, 00:23:08.675 "zcopy": false, 00:23:08.675 "c2h_success": false, 00:23:08.675 "sock_priority": 0, 00:23:08.675 "abort_timeout_sec": 1, 00:23:08.675 "ack_timeout": 0, 00:23:08.675 "data_wr_pool_size": 0 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "nvmf_create_subsystem", 00:23:08.675 "params": { 00:23:08.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.675 "allow_any_host": false, 00:23:08.675 "serial_number": "00000000000000000000", 00:23:08.675 "model_number": "SPDK bdev Controller", 00:23:08.675 "max_namespaces": 32, 00:23:08.675 "min_cntlid": 1, 
00:23:08.675 "max_cntlid": 65519, 00:23:08.675 "ana_reporting": false 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "nvmf_subsystem_add_host", 00:23:08.675 "params": { 00:23:08.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.675 "host": "nqn.2016-06.io.spdk:host1", 00:23:08.675 "psk": "key0" 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "nvmf_subsystem_add_ns", 00:23:08.675 "params": { 00:23:08.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.675 "namespace": { 00:23:08.675 "nsid": 1, 00:23:08.675 "bdev_name": "malloc0", 00:23:08.675 "nguid": "6CA7EE8AEA034CCE8B5E7B5ED813210D", 00:23:08.675 "uuid": "6ca7ee8a-ea03-4cce-8b5e-7b5ed813210d", 00:23:08.675 "no_auto_visible": false 00:23:08.675 } 00:23:08.675 } 00:23:08.675 }, 00:23:08.675 { 00:23:08.675 "method": "nvmf_subsystem_add_listener", 00:23:08.675 "params": { 00:23:08.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.675 "listen_address": { 00:23:08.675 "trtype": "TCP", 00:23:08.675 "adrfam": "IPv4", 00:23:08.675 "traddr": "10.0.0.2", 00:23:08.675 "trsvcid": "4420" 00:23:08.676 }, 00:23:08.676 "secure_channel": false, 00:23:08.676 "sock_impl": "ssl" 00:23:08.676 } 00:23:08.676 } 00:23:08.676 ] 00:23:08.676 } 00:23:08.676 ] 00:23:08.676 }' 00:23:08.676 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:08.935 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:08.935 "subsystems": [ 00:23:08.935 { 00:23:08.935 "subsystem": "keyring", 00:23:08.935 "config": [ 00:23:08.935 { 00:23:08.935 "method": "keyring_file_add_key", 00:23:08.935 "params": { 00:23:08.935 "name": "key0", 00:23:08.935 "path": "/tmp/tmp.vIBqfv6WTu" 00:23:08.935 } 00:23:08.935 } 00:23:08.935 ] 00:23:08.935 }, 00:23:08.935 { 00:23:08.935 "subsystem": "iobuf", 00:23:08.935 "config": [ 00:23:08.935 { 00:23:08.935 "method": 
"iobuf_set_options", 00:23:08.935 "params": { 00:23:08.935 "small_pool_count": 8192, 00:23:08.935 "large_pool_count": 1024, 00:23:08.935 "small_bufsize": 8192, 00:23:08.935 "large_bufsize": 135168, 00:23:08.935 "enable_numa": false 00:23:08.935 } 00:23:08.935 } 00:23:08.935 ] 00:23:08.935 }, 00:23:08.935 { 00:23:08.935 "subsystem": "sock", 00:23:08.935 "config": [ 00:23:08.935 { 00:23:08.935 "method": "sock_set_default_impl", 00:23:08.935 "params": { 00:23:08.935 "impl_name": "posix" 00:23:08.935 } 00:23:08.935 }, 00:23:08.935 { 00:23:08.935 "method": "sock_impl_set_options", 00:23:08.935 "params": { 00:23:08.935 "impl_name": "ssl", 00:23:08.935 "recv_buf_size": 4096, 00:23:08.935 "send_buf_size": 4096, 00:23:08.935 "enable_recv_pipe": true, 00:23:08.935 "enable_quickack": false, 00:23:08.935 "enable_placement_id": 0, 00:23:08.935 "enable_zerocopy_send_server": true, 00:23:08.935 "enable_zerocopy_send_client": false, 00:23:08.935 "zerocopy_threshold": 0, 00:23:08.935 "tls_version": 0, 00:23:08.935 "enable_ktls": false 00:23:08.935 } 00:23:08.935 }, 00:23:08.935 { 00:23:08.936 "method": "sock_impl_set_options", 00:23:08.936 "params": { 00:23:08.936 "impl_name": "posix", 00:23:08.936 "recv_buf_size": 2097152, 00:23:08.936 "send_buf_size": 2097152, 00:23:08.936 "enable_recv_pipe": true, 00:23:08.936 "enable_quickack": false, 00:23:08.936 "enable_placement_id": 0, 00:23:08.936 "enable_zerocopy_send_server": true, 00:23:08.936 "enable_zerocopy_send_client": false, 00:23:08.936 "zerocopy_threshold": 0, 00:23:08.936 "tls_version": 0, 00:23:08.936 "enable_ktls": false 00:23:08.936 } 00:23:08.936 } 00:23:08.936 ] 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "subsystem": "vmd", 00:23:08.936 "config": [] 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "subsystem": "accel", 00:23:08.936 "config": [ 00:23:08.936 { 00:23:08.936 "method": "accel_set_options", 00:23:08.936 "params": { 00:23:08.936 "small_cache_size": 128, 00:23:08.936 "large_cache_size": 16, 00:23:08.936 "task_count": 
2048, 00:23:08.936 "sequence_count": 2048, 00:23:08.936 "buf_count": 2048 00:23:08.936 } 00:23:08.936 } 00:23:08.936 ] 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "subsystem": "bdev", 00:23:08.936 "config": [ 00:23:08.936 { 00:23:08.936 "method": "bdev_set_options", 00:23:08.936 "params": { 00:23:08.936 "bdev_io_pool_size": 65535, 00:23:08.936 "bdev_io_cache_size": 256, 00:23:08.936 "bdev_auto_examine": true, 00:23:08.936 "iobuf_small_cache_size": 128, 00:23:08.936 "iobuf_large_cache_size": 16 00:23:08.936 } 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "method": "bdev_raid_set_options", 00:23:08.936 "params": { 00:23:08.936 "process_window_size_kb": 1024, 00:23:08.936 "process_max_bandwidth_mb_sec": 0 00:23:08.936 } 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "method": "bdev_iscsi_set_options", 00:23:08.936 "params": { 00:23:08.936 "timeout_sec": 30 00:23:08.936 } 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "method": "bdev_nvme_set_options", 00:23:08.936 "params": { 00:23:08.936 "action_on_timeout": "none", 00:23:08.936 "timeout_us": 0, 00:23:08.936 "timeout_admin_us": 0, 00:23:08.936 "keep_alive_timeout_ms": 10000, 00:23:08.936 "arbitration_burst": 0, 00:23:08.936 "low_priority_weight": 0, 00:23:08.936 "medium_priority_weight": 0, 00:23:08.936 "high_priority_weight": 0, 00:23:08.936 "nvme_adminq_poll_period_us": 10000, 00:23:08.936 "nvme_ioq_poll_period_us": 0, 00:23:08.936 "io_queue_requests": 512, 00:23:08.936 "delay_cmd_submit": true, 00:23:08.936 "transport_retry_count": 4, 00:23:08.936 "bdev_retry_count": 3, 00:23:08.936 "transport_ack_timeout": 0, 00:23:08.936 "ctrlr_loss_timeout_sec": 0, 00:23:08.936 "reconnect_delay_sec": 0, 00:23:08.936 "fast_io_fail_timeout_sec": 0, 00:23:08.936 "disable_auto_failback": false, 00:23:08.936 "generate_uuids": false, 00:23:08.936 "transport_tos": 0, 00:23:08.936 "nvme_error_stat": false, 00:23:08.936 "rdma_srq_size": 0, 00:23:08.936 "io_path_stat": false, 00:23:08.936 "allow_accel_sequence": false, 00:23:08.936 
"rdma_max_cq_size": 0, 00:23:08.936 "rdma_cm_event_timeout_ms": 0, 00:23:08.936 "dhchap_digests": [ 00:23:08.936 "sha256", 00:23:08.936 "sha384", 00:23:08.936 "sha512" 00:23:08.936 ], 00:23:08.936 "dhchap_dhgroups": [ 00:23:08.936 "null", 00:23:08.936 "ffdhe2048", 00:23:08.936 "ffdhe3072", 00:23:08.936 "ffdhe4096", 00:23:08.936 "ffdhe6144", 00:23:08.936 "ffdhe8192" 00:23:08.936 ], 00:23:08.936 "rdma_umr_per_io": false 00:23:08.936 } 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "method": "bdev_nvme_attach_controller", 00:23:08.936 "params": { 00:23:08.936 "name": "nvme0", 00:23:08.936 "trtype": "TCP", 00:23:08.936 "adrfam": "IPv4", 00:23:08.936 "traddr": "10.0.0.2", 00:23:08.936 "trsvcid": "4420", 00:23:08.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.936 "prchk_reftag": false, 00:23:08.936 "prchk_guard": false, 00:23:08.936 "ctrlr_loss_timeout_sec": 0, 00:23:08.936 "reconnect_delay_sec": 0, 00:23:08.936 "fast_io_fail_timeout_sec": 0, 00:23:08.936 "psk": "key0", 00:23:08.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.936 "hdgst": false, 00:23:08.936 "ddgst": false, 00:23:08.936 "multipath": "multipath" 00:23:08.936 } 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "method": "bdev_nvme_set_hotplug", 00:23:08.936 "params": { 00:23:08.936 "period_us": 100000, 00:23:08.936 "enable": false 00:23:08.936 } 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "method": "bdev_enable_histogram", 00:23:08.936 "params": { 00:23:08.936 "name": "nvme0n1", 00:23:08.936 "enable": true 00:23:08.936 } 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "method": "bdev_wait_for_examine" 00:23:08.936 } 00:23:08.936 ] 00:23:08.936 }, 00:23:08.936 { 00:23:08.936 "subsystem": "nbd", 00:23:08.936 "config": [] 00:23:08.936 } 00:23:08.936 ] 00:23:08.936 }' 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1024797 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024797 ']' 00:23:08.936 13:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024797 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024797 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024797' 00:23:08.936 killing process with pid 1024797 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024797 00:23:08.936 Received shutdown signal, test time was about 1.000000 seconds 00:23:08.936 00:23:08.936 Latency(us) 00:23:08.936 [2024-12-15T12:03:16.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.936 [2024-12-15T12:03:16.843Z] =================================================================================================================== 00:23:08.936 [2024-12-15T12:03:16.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.936 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024797 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1024778 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024778 ']' 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024778 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.196 13:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024778 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024778' 00:23:09.196 killing process with pid 1024778 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024778 00:23:09.196 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024778 00:23:09.455 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:09.456 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.456 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.456 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:09.456 "subsystems": [ 00:23:09.456 { 00:23:09.456 "subsystem": "keyring", 00:23:09.456 "config": [ 00:23:09.456 { 00:23:09.456 "method": "keyring_file_add_key", 00:23:09.456 "params": { 00:23:09.456 "name": "key0", 00:23:09.456 "path": "/tmp/tmp.vIBqfv6WTu" 00:23:09.456 } 00:23:09.456 } 00:23:09.456 ] 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "subsystem": "iobuf", 00:23:09.456 "config": [ 00:23:09.456 { 00:23:09.456 "method": "iobuf_set_options", 00:23:09.456 "params": { 00:23:09.456 "small_pool_count": 8192, 00:23:09.456 "large_pool_count": 1024, 00:23:09.456 "small_bufsize": 8192, 00:23:09.456 "large_bufsize": 135168, 00:23:09.456 "enable_numa": false 00:23:09.456 } 00:23:09.456 } 
00:23:09.456 ] 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "subsystem": "sock", 00:23:09.456 "config": [ 00:23:09.456 { 00:23:09.456 "method": "sock_set_default_impl", 00:23:09.456 "params": { 00:23:09.456 "impl_name": "posix" 00:23:09.456 } 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "method": "sock_impl_set_options", 00:23:09.456 "params": { 00:23:09.456 "impl_name": "ssl", 00:23:09.456 "recv_buf_size": 4096, 00:23:09.456 "send_buf_size": 4096, 00:23:09.456 "enable_recv_pipe": true, 00:23:09.456 "enable_quickack": false, 00:23:09.456 "enable_placement_id": 0, 00:23:09.456 "enable_zerocopy_send_server": true, 00:23:09.456 "enable_zerocopy_send_client": false, 00:23:09.456 "zerocopy_threshold": 0, 00:23:09.456 "tls_version": 0, 00:23:09.456 "enable_ktls": false 00:23:09.456 } 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "method": "sock_impl_set_options", 00:23:09.456 "params": { 00:23:09.456 "impl_name": "posix", 00:23:09.456 "recv_buf_size": 2097152, 00:23:09.456 "send_buf_size": 2097152, 00:23:09.456 "enable_recv_pipe": true, 00:23:09.456 "enable_quickack": false, 00:23:09.456 "enable_placement_id": 0, 00:23:09.456 "enable_zerocopy_send_server": true, 00:23:09.456 "enable_zerocopy_send_client": false, 00:23:09.456 "zerocopy_threshold": 0, 00:23:09.456 "tls_version": 0, 00:23:09.456 "enable_ktls": false 00:23:09.456 } 00:23:09.456 } 00:23:09.456 ] 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "subsystem": "vmd", 00:23:09.456 "config": [] 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "subsystem": "accel", 00:23:09.456 "config": [ 00:23:09.456 { 00:23:09.456 "method": "accel_set_options", 00:23:09.456 "params": { 00:23:09.456 "small_cache_size": 128, 00:23:09.456 "large_cache_size": 16, 00:23:09.456 "task_count": 2048, 00:23:09.456 "sequence_count": 2048, 00:23:09.456 "buf_count": 2048 00:23:09.456 } 00:23:09.456 } 00:23:09.456 ] 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "subsystem": "bdev", 00:23:09.456 "config": [ 00:23:09.456 { 00:23:09.456 "method": 
"bdev_set_options", 00:23:09.456 "params": { 00:23:09.456 "bdev_io_pool_size": 65535, 00:23:09.456 "bdev_io_cache_size": 256, 00:23:09.456 "bdev_auto_examine": true, 00:23:09.456 "iobuf_small_cache_size": 128, 00:23:09.456 "iobuf_large_cache_size": 16 00:23:09.456 } 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "method": "bdev_raid_set_options", 00:23:09.456 "params": { 00:23:09.456 "process_window_size_kb": 1024, 00:23:09.456 "process_max_bandwidth_mb_sec": 0 00:23:09.456 } 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "method": "bdev_iscsi_set_options", 00:23:09.456 "params": { 00:23:09.456 "timeout_sec": 30 00:23:09.456 } 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "method": "bdev_nvme_set_options", 00:23:09.456 "params": { 00:23:09.456 "action_on_timeout": "none", 00:23:09.456 "timeout_us": 0, 00:23:09.456 "timeout_admin_us": 0, 00:23:09.456 "keep_alive_timeout_ms": 10000, 00:23:09.456 "arbitration_burst": 0, 00:23:09.456 "low_priority_weight": 0, 00:23:09.456 "medium_priority_weight": 0, 00:23:09.456 "high_priority_weight": 0, 00:23:09.456 "nvme_adminq_poll_period_us": 10000, 00:23:09.456 "nvme_ioq_poll_period_us": 0, 00:23:09.456 "io_queue_requests": 0, 00:23:09.456 "delay_cmd_submit": true, 00:23:09.456 "transport_retry_count": 4, 00:23:09.456 "bdev_retry_count": 3, 00:23:09.456 "transport_ack_timeout": 0, 00:23:09.456 "ctrlr_loss_timeout_sec": 0, 00:23:09.456 "reconnect_delay_sec": 0, 00:23:09.456 "fast_io_fail_timeout_sec": 0, 00:23:09.456 "disable_auto_failback": false, 00:23:09.456 "generate_uuids": false, 00:23:09.456 "transport_tos": 0, 00:23:09.456 "nvme_error_stat": false, 00:23:09.456 "rdma_srq_size": 0, 00:23:09.456 "io_path_stat": false, 00:23:09.456 "allow_accel_sequence": false, 00:23:09.456 "rdma_max_cq_size": 0, 00:23:09.456 "rdma_cm_event_timeout_ms": 0, 00:23:09.456 "dhchap_digests": [ 00:23:09.456 "sha256", 00:23:09.456 "sha384", 00:23:09.456 "sha512" 00:23:09.456 ], 00:23:09.456 "dhchap_dhgroups": [ 00:23:09.456 "null", 00:23:09.456 
"ffdhe2048", 00:23:09.456 "ffdhe3072", 00:23:09.456 "ffdhe4096", 00:23:09.456 "ffdhe6144", 00:23:09.456 "ffdhe8192" 00:23:09.456 ], 00:23:09.456 "rdma_umr_per_io": false 00:23:09.456 } 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "method": "bdev_nvme_set_hotplug", 00:23:09.456 "params": { 00:23:09.456 "period_us": 100000, 00:23:09.456 "enable": false 00:23:09.456 } 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "method": "bdev_malloc_create", 00:23:09.456 "params": { 00:23:09.456 "name": "malloc0", 00:23:09.456 "num_blocks": 8192, 00:23:09.456 "block_size": 4096, 00:23:09.456 "physical_block_size": 4096, 00:23:09.456 "uuid": "6ca7ee8a-ea03-4cce-8b5e-7b5ed813210d", 00:23:09.456 "optimal_io_boundary": 0, 00:23:09.456 "md_size": 0, 00:23:09.456 "dif_type": 0, 00:23:09.456 "dif_is_head_of_md": false, 00:23:09.456 "dif_pi_format": 0 00:23:09.456 } 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "method": "bdev_wait_for_examine" 00:23:09.456 } 00:23:09.456 ] 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "subsystem": "nbd", 00:23:09.456 "config": [] 00:23:09.456 }, 00:23:09.456 { 00:23:09.456 "subsystem": "scheduler", 00:23:09.456 "config": [ 00:23:09.456 { 00:23:09.456 "method": "framework_set_scheduler", 00:23:09.456 "params": { 00:23:09.456 "name": "static" 00:23:09.456 } 00:23:09.456 } 00:23:09.456 ] 00:23:09.457 }, 00:23:09.457 { 00:23:09.457 "subsystem": "nvmf", 00:23:09.457 "config": [ 00:23:09.457 { 00:23:09.457 "method": "nvmf_set_config", 00:23:09.457 "params": { 00:23:09.457 "discovery_filter": "match_any", 00:23:09.457 "admin_cmd_passthru": { 00:23:09.457 "identify_ctrlr": false 00:23:09.457 }, 00:23:09.457 "dhchap_digests": [ 00:23:09.457 "sha256", 00:23:09.457 "sha384", 00:23:09.457 "sha512" 00:23:09.457 ], 00:23:09.457 "dhchap_dhgroups": [ 00:23:09.457 "null", 00:23:09.457 "ffdhe2048", 00:23:09.457 "ffdhe3072", 00:23:09.457 "ffdhe4096", 00:23:09.457 "ffdhe6144", 00:23:09.457 "ffdhe8192" 00:23:09.457 ] 00:23:09.457 } 00:23:09.457 }, 00:23:09.457 { 00:23:09.457 
"method": "nvmf_set_max_subsystems", 00:23:09.457 "params": { 00:23:09.457 "max_subsystems": 1024 00:23:09.457 } 00:23:09.457 }, 00:23:09.457 { 00:23:09.457 "method": "nvmf_set_crdt", 00:23:09.457 "params": { 00:23:09.457 "crdt1": 0, 00:23:09.457 "crdt2": 0, 00:23:09.457 "crdt3": 0 00:23:09.457 } 00:23:09.457 }, 00:23:09.457 { 00:23:09.457 "method": "nvmf_create_transport", 00:23:09.457 "params": { 00:23:09.457 "trtype": "TCP", 00:23:09.457 "max_queue_depth": 128, 00:23:09.457 "max_io_qpairs_per_ctrlr": 127, 00:23:09.457 "in_capsule_data_size": 4096, 00:23:09.457 "max_io_size": 131072, 00:23:09.457 "io_unit_size": 131072, 00:23:09.457 "max_aq_depth": 128, 00:23:09.457 "num_shared_buffers": 511, 00:23:09.457 "buf_cache_size": 4294967295, 00:23:09.457 "dif_insert_or_strip": false, 00:23:09.457 "zcopy": false, 00:23:09.457 "c2h_success": false, 00:23:09.457 "sock_priority": 0, 00:23:09.457 "abort_timeout_sec": 1, 00:23:09.457 "ack_timeout": 0, 00:23:09.457 "data_wr_pool_size": 0 00:23:09.457 } 00:23:09.457 }, 00:23:09.457 { 00:23:09.457 "method": "nvmf_create_subsystem", 00:23:09.457 "params": { 00:23:09.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.457 "allow_any_host": false, 00:23:09.457 "serial_number": "00000000000000000000", 00:23:09.457 "model_number": "SPDK bdev Controller", 00:23:09.457 "max_namespaces": 32, 00:23:09.457 "min_cntlid": 1, 00:23:09.457 "max_cntlid": 65519, 00:23:09.457 "ana_reporting": false 00:23:09.457 } 00:23:09.457 }, 00:23:09.457 { 00:23:09.457 "method": "nvmf_subsystem_add_host", 00:23:09.457 "params": { 00:23:09.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.457 "host": "nqn.2016-06.io.spdk:host1", 00:23:09.457 "psk": "key0" 00:23:09.457 } 00:23:09.457 }, 00:23:09.457 { 00:23:09.457 "method": "nvmf_subsystem_add_ns", 00:23:09.457 "params": { 00:23:09.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.457 "namespace": { 00:23:09.457 "nsid": 1, 00:23:09.457 "bdev_name": "malloc0", 00:23:09.457 "nguid": 
"6CA7EE8AEA034CCE8B5E7B5ED813210D", 00:23:09.457 "uuid": "6ca7ee8a-ea03-4cce-8b5e-7b5ed813210d", 00:23:09.457 "no_auto_visible": false 00:23:09.457 } 00:23:09.457 } 00:23:09.457 }, 00:23:09.457 { 00:23:09.457 "method": "nvmf_subsystem_add_listener", 00:23:09.457 "params": { 00:23:09.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.457 "listen_address": { 00:23:09.457 "trtype": "TCP", 00:23:09.457 "adrfam": "IPv4", 00:23:09.457 "traddr": "10.0.0.2", 00:23:09.457 "trsvcid": "4420" 00:23:09.457 }, 00:23:09.457 "secure_channel": false, 00:23:09.457 "sock_impl": "ssl" 00:23:09.457 } 00:23:09.457 } 00:23:09.457 ] 00:23:09.457 } 00:23:09.457 ] 00:23:09.457 }' 00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1025263 00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1025263 00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025263 ']' 00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.457 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.457 [2024-12-15 13:03:17.172582] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:09.457 [2024-12-15 13:03:17.172627] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.457 [2024-12-15 13:03:17.234566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.457 [2024-12-15 13:03:17.254964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.457 [2024-12-15 13:03:17.255004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.457 [2024-12-15 13:03:17.255011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.457 [2024-12-15 13:03:17.255017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.457 [2024-12-15 13:03:17.255021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.457 [2024-12-15 13:03:17.255581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.716 [2024-12-15 13:03:17.462903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.717 [2024-12-15 13:03:17.494918] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.717 [2024-12-15 13:03:17.495116] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1025502 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1025502 /var/tmp/bdevperf.sock 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025502 ']' 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.286 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:10.286 "subsystems": [ 00:23:10.286 { 00:23:10.286 "subsystem": "keyring", 00:23:10.286 "config": [ 00:23:10.286 { 00:23:10.286 "method": "keyring_file_add_key", 00:23:10.286 "params": { 00:23:10.286 "name": "key0", 00:23:10.286 "path": "/tmp/tmp.vIBqfv6WTu" 00:23:10.286 } 00:23:10.286 } 00:23:10.286 ] 00:23:10.286 }, 00:23:10.286 { 00:23:10.286 "subsystem": "iobuf", 00:23:10.286 "config": [ 00:23:10.286 { 00:23:10.286 "method": "iobuf_set_options", 00:23:10.286 "params": { 00:23:10.286 "small_pool_count": 8192, 00:23:10.286 "large_pool_count": 1024, 00:23:10.286 "small_bufsize": 8192, 00:23:10.286 "large_bufsize": 135168, 00:23:10.286 "enable_numa": false 00:23:10.286 } 00:23:10.286 } 00:23:10.286 ] 00:23:10.286 }, 00:23:10.286 { 00:23:10.286 "subsystem": "sock", 00:23:10.286 "config": [ 00:23:10.286 { 00:23:10.286 "method": "sock_set_default_impl", 00:23:10.286 "params": { 00:23:10.286 "impl_name": "posix" 00:23:10.286 } 00:23:10.286 }, 00:23:10.286 { 00:23:10.286 "method": "sock_impl_set_options", 00:23:10.286 "params": { 00:23:10.286 "impl_name": "ssl", 00:23:10.286 "recv_buf_size": 4096, 00:23:10.286 "send_buf_size": 4096, 00:23:10.286 "enable_recv_pipe": true, 00:23:10.286 "enable_quickack": false, 00:23:10.286 "enable_placement_id": 0, 00:23:10.286 "enable_zerocopy_send_server": true, 00:23:10.286 "enable_zerocopy_send_client": false, 00:23:10.286 "zerocopy_threshold": 0, 00:23:10.286 "tls_version": 0, 00:23:10.286 "enable_ktls": false 00:23:10.286 } 00:23:10.286 }, 00:23:10.286 { 00:23:10.286 "method": "sock_impl_set_options", 00:23:10.286 "params": { 
00:23:10.286 "impl_name": "posix", 00:23:10.286 "recv_buf_size": 2097152, 00:23:10.286 "send_buf_size": 2097152, 00:23:10.286 "enable_recv_pipe": true, 00:23:10.286 "enable_quickack": false, 00:23:10.286 "enable_placement_id": 0, 00:23:10.286 "enable_zerocopy_send_server": true, 00:23:10.286 "enable_zerocopy_send_client": false, 00:23:10.286 "zerocopy_threshold": 0, 00:23:10.286 "tls_version": 0, 00:23:10.286 "enable_ktls": false 00:23:10.286 } 00:23:10.286 } 00:23:10.286 ] 00:23:10.286 }, 00:23:10.286 { 00:23:10.286 "subsystem": "vmd", 00:23:10.286 "config": [] 00:23:10.286 }, 00:23:10.286 { 00:23:10.286 "subsystem": "accel", 00:23:10.286 "config": [ 00:23:10.286 { 00:23:10.286 "method": "accel_set_options", 00:23:10.286 "params": { 00:23:10.286 "small_cache_size": 128, 00:23:10.287 "large_cache_size": 16, 00:23:10.287 "task_count": 2048, 00:23:10.287 "sequence_count": 2048, 00:23:10.287 "buf_count": 2048 00:23:10.287 } 00:23:10.287 } 00:23:10.287 ] 00:23:10.287 }, 00:23:10.287 { 00:23:10.287 "subsystem": "bdev", 00:23:10.287 "config": [ 00:23:10.287 { 00:23:10.287 "method": "bdev_set_options", 00:23:10.287 "params": { 00:23:10.287 "bdev_io_pool_size": 65535, 00:23:10.287 "bdev_io_cache_size": 256, 00:23:10.287 "bdev_auto_examine": true, 00:23:10.287 "iobuf_small_cache_size": 128, 00:23:10.287 "iobuf_large_cache_size": 16 00:23:10.287 } 00:23:10.287 }, 00:23:10.287 { 00:23:10.287 "method": "bdev_raid_set_options", 00:23:10.287 "params": { 00:23:10.287 "process_window_size_kb": 1024, 00:23:10.287 "process_max_bandwidth_mb_sec": 0 00:23:10.287 } 00:23:10.287 }, 00:23:10.287 { 00:23:10.287 "method": "bdev_iscsi_set_options", 00:23:10.287 "params": { 00:23:10.287 "timeout_sec": 30 00:23:10.287 } 00:23:10.287 }, 00:23:10.287 { 00:23:10.287 "method": "bdev_nvme_set_options", 00:23:10.287 "params": { 00:23:10.287 "action_on_timeout": "none", 00:23:10.287 "timeout_us": 0, 00:23:10.287 "timeout_admin_us": 0, 00:23:10.287 "keep_alive_timeout_ms": 10000, 00:23:10.287 
"arbitration_burst": 0, 00:23:10.287 "low_priority_weight": 0, 00:23:10.287 "medium_priority_weight": 0, 00:23:10.287 "high_priority_weight": 0, 00:23:10.287 "nvme_adminq_poll_period_us": 10000, 00:23:10.287 "nvme_ioq_poll_period_us": 0, 00:23:10.287 "io_queue_requests": 512, 00:23:10.287 "delay_cmd_submit": true, 00:23:10.287 "transport_retry_count": 4, 00:23:10.287 "bdev_retry_count": 3, 00:23:10.287 "transport_ack_timeout": 0, 00:23:10.287 "ctrlr_loss_timeout_sec": 0, 00:23:10.287 "reconnect_delay_sec": 0, 00:23:10.287 "fast_io_fail_timeout_sec": 0, 00:23:10.287 "disable_auto_failback": false, 00:23:10.287 "generate_uuids": false, 00:23:10.287 "transport_tos": 0, 00:23:10.287 "nvme_error_stat": false, 00:23:10.287 "rdma_srq_size": 0, 00:23:10.287 "io_path_stat": false, 00:23:10.287 "allow_accel_sequence": false, 00:23:10.287 "rdma_max_cq_size": 0, 00:23:10.287 "rdma_cm_event_timeout_ms": 0, 00:23:10.287 "dhchap_digests": [ 00:23:10.287 "sha256", 00:23:10.287 "sha384", 00:23:10.287 "sha512" 00:23:10.287 ], 00:23:10.287 "dhchap_dhgroups": [ 00:23:10.287 "null", 00:23:10.287 "ffdhe2048", 00:23:10.287 "ffdhe3072", 00:23:10.287 "ffdhe4096", 00:23:10.287 "ffdhe6144", 00:23:10.287 "ffdhe8192" 00:23:10.287 ], 00:23:10.287 "rdma_umr_per_io": false 00:23:10.287 } 00:23:10.287 }, 00:23:10.287 { 00:23:10.287 "method": "bdev_nvme_attach_controller", 00:23:10.287 "params": { 00:23:10.287 "name": "nvme0", 00:23:10.287 "trtype": "TCP", 00:23:10.287 "adrfam": "IPv4", 00:23:10.287 "traddr": "10.0.0.2", 00:23:10.287 "trsvcid": "4420", 00:23:10.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.287 "prchk_reftag": false, 00:23:10.287 "prchk_guard": false, 00:23:10.287 "ctrlr_loss_timeout_sec": 0, 00:23:10.287 "reconnect_delay_sec": 0, 00:23:10.287 "fast_io_fail_timeout_sec": 0, 00:23:10.287 "psk": "key0", 00:23:10.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.287 "hdgst": false, 00:23:10.287 "ddgst": false, 00:23:10.287 "multipath": "multipath" 00:23:10.287 } 00:23:10.287 
}, 00:23:10.287 { 00:23:10.287 "method": "bdev_nvme_set_hotplug", 00:23:10.287 "params": { 00:23:10.287 "period_us": 100000, 00:23:10.287 "enable": false 00:23:10.287 } 00:23:10.287 }, 00:23:10.287 { 00:23:10.287 "method": "bdev_enable_histogram", 00:23:10.287 "params": { 00:23:10.287 "name": "nvme0n1", 00:23:10.287 "enable": true 00:23:10.287 } 00:23:10.287 }, 00:23:10.287 { 00:23:10.287 "method": "bdev_wait_for_examine" 00:23:10.287 } 00:23:10.287 ] 00:23:10.287 }, 00:23:10.287 { 00:23:10.287 "subsystem": "nbd", 00:23:10.287 "config": [] 00:23:10.287 } 00:23:10.287 ] 00:23:10.287 }' 00:23:10.287 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.287 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.287 [2024-12-15 13:03:18.113338] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:10.287 [2024-12-15 13:03:18.113386] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025502 ] 00:23:10.287 [2024-12-15 13:03:18.184329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.547 [2024-12-15 13:03:18.206191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.547 [2024-12-15 13:03:18.354638] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.116 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.116 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:11.116 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:23:11.116 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:11.375 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.375 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:11.375 Running I/O for 1 seconds... 00:23:12.753 5441.00 IOPS, 21.25 MiB/s 00:23:12.753 Latency(us) 00:23:12.753 [2024-12-15T12:03:20.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.754 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:12.754 Verification LBA range: start 0x0 length 0x2000 00:23:12.754 nvme0n1 : 1.02 5471.12 21.37 0.00 0.00 23207.47 7333.79 20472.20 00:23:12.754 [2024-12-15T12:03:20.661Z] =================================================================================================================== 00:23:12.754 [2024-12-15T12:03:20.661Z] Total : 5471.12 21.37 0.00 0.00 23207.47 7333.79 20472.20 00:23:12.754 { 00:23:12.754 "results": [ 00:23:12.754 { 00:23:12.754 "job": "nvme0n1", 00:23:12.754 "core_mask": "0x2", 00:23:12.754 "workload": "verify", 00:23:12.754 "status": "finished", 00:23:12.754 "verify_range": { 00:23:12.754 "start": 0, 00:23:12.754 "length": 8192 00:23:12.754 }, 00:23:12.754 "queue_depth": 128, 00:23:12.754 "io_size": 4096, 00:23:12.754 "runtime": 1.017891, 00:23:12.754 "iops": 5471.116259010051, 00:23:12.754 "mibps": 21.371547886758012, 00:23:12.754 "io_failed": 0, 00:23:12.754 "io_timeout": 0, 00:23:12.754 "avg_latency_us": 23207.47109389563, 00:23:12.754 "min_latency_us": 7333.7904761904765, 00:23:12.754 "max_latency_us": 20472.198095238095 00:23:12.754 } 00:23:12.754 ], 00:23:12.754 "core_count": 1 00:23:12.754 } 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:12.754 
13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:12.754 nvmf_trace.0 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1025502 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025502 ']' 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025502 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1025502 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025502' 00:23:12.754 killing process with pid 1025502 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025502 00:23:12.754 Received shutdown signal, test time was about 1.000000 seconds 00:23:12.754 00:23:12.754 Latency(us) 00:23:12.754 [2024-12-15T12:03:20.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.754 [2024-12-15T12:03:20.661Z] =================================================================================================================== 00:23:12.754 [2024-12-15T12:03:20.661Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025502 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.754 rmmod nvme_tcp 00:23:12.754 rmmod nvme_fabrics 00:23:12.754 rmmod nvme_keyring 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1025263 ']' 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1025263 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025263 ']' 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025263 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.754 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025263 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025263' 00:23:13.013 killing process with pid 1025263 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025263 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025263 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.013 13:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.013 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.552 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.552 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.0F8YKkzKau /tmp/tmp.OAY8ArJkcc /tmp/tmp.vIBqfv6WTu 00:23:15.552 00:23:15.552 real 1m19.024s 00:23:15.552 user 2m1.612s 00:23:15.552 sys 0m30.012s 00:23:15.552 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:15.552 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.552 ************************************ 00:23:15.552 END TEST nvmf_tls 00:23:15.552 ************************************ 00:23:15.552 13:03:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:15.552 13:03:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:15.552 
13:03:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:15.552 13:03:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:15.552 ************************************ 00:23:15.552 START TEST nvmf_fips 00:23:15.552 ************************************ 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:15.552 * Looking for test storage... 00:23:15.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:15.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.552 --rc genhtml_branch_coverage=1 00:23:15.552 --rc genhtml_function_coverage=1 00:23:15.552 --rc genhtml_legend=1 00:23:15.552 --rc geninfo_all_blocks=1 00:23:15.552 --rc geninfo_unexecuted_blocks=1 00:23:15.552 00:23:15.552 ' 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:15.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.552 --rc genhtml_branch_coverage=1 00:23:15.552 --rc genhtml_function_coverage=1 00:23:15.552 --rc genhtml_legend=1 00:23:15.552 --rc geninfo_all_blocks=1 00:23:15.552 --rc geninfo_unexecuted_blocks=1 00:23:15.552 00:23:15.552 ' 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:15.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.552 --rc genhtml_branch_coverage=1 00:23:15.552 --rc genhtml_function_coverage=1 00:23:15.552 --rc genhtml_legend=1 00:23:15.552 --rc geninfo_all_blocks=1 00:23:15.552 --rc geninfo_unexecuted_blocks=1 00:23:15.552 00:23:15.552 ' 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:15.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.552 --rc genhtml_branch_coverage=1 00:23:15.552 --rc genhtml_function_coverage=1 00:23:15.552 --rc genhtml_legend=1 00:23:15.552 --rc geninfo_all_blocks=1 00:23:15.552 --rc geninfo_unexecuted_blocks=1 00:23:15.552 00:23:15.552 ' 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.552 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:15.553 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:15.554 Error setting digest 00:23:15.554 407225810D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:15.554 407225810D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.554 13:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.554 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
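common.sh@315-346 above registers the supported NIC device IDs into `e810`, `x722`, and `mlx` buckets before walking the PCI bus. The mapping can be sketched as a standalone helper; the function name `classify_nic` is illustrative, and the IDs are the ones the trace registers:

```shell
# Map a "vendor device" PCI ID pair onto the driver family buckets
# the trace builds (0x1592/0x159b -> e810, 0x37d2 -> x722,
# Mellanox 0x15b3 devices -> mlx).
classify_nic() {
    case "$1 $2" in
        "0x8086 0x1592"|"0x8086 0x159b") echo e810 ;;
        "0x8086 0x37d2")                 echo x722 ;;
        "0x15b3 "*)                      echo mlx ;;
        *)                               echo unknown ;;
    esac
}

# The two ports discovered later in this trace are 0x8086:0x159b.
classify_nic 0x8086 0x159b
classify_nic 0x15b3 0x1017
```

The real script keys an associative `pci_bus_cache` by the same `vendor:device` strings; this sketch only reproduces the classification step.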
00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:22.127 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:22.127 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:22.127 Found net devices under 0000:af:00.0: cvl_0_0 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
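For each matching PCI function, common.sh@411 globs `/sys/bus/pci/devices/$pci/net/*` to recover the attached netdev names (`cvl_0_0` and `cvl_0_1` above). A minimal standalone version of that lookup, with a made-up helper name:

```shell
# Print the kernel network interfaces attached to one PCI function,
# via the same sysfs glob common.sh@411 uses. Prints nothing (and
# still exits 0) when the device has no netdev or does not exist.
net_devs_for_pci() {
    local pci=$1 dev
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] && echo "${dev##*/}"
    done
    return 0
}

net_devs_for_pci 0000:af:00.0
```

The `${dev##*/}` expansion mirrors the `"${pci_net_devs[@]##*/}"` step in the trace that strips the sysfs path prefix down to the bare interface name.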
00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.127 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:22.127 Found net devices under 0000:af:00.1: cvl_0_1 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.128 13:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.128 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:22.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:23:22.128 00:23:22.128 --- 10.0.0.2 ping statistics --- 00:23:22.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.128 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:23:22.128 00:23:22.128 --- 10.0.0.1 ping statistics --- 00:23:22.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.128 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:22.128 13:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1029447 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1029447 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1029447 ']' 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:22.128 [2024-12-15 13:03:29.353460] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:22.128 [2024-12-15 13:03:29.353508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.128 [2024-12-15 13:03:29.433920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.128 [2024-12-15 13:03:29.454638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.128 [2024-12-15 13:03:29.454676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.128 [2024-12-15 13:03:29.454683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.128 [2024-12-15 13:03:29.454690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.128 [2024-12-15 13:03:29.454695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.128 [2024-12-15 13:03:29.455184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.hDP 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.hDP 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.hDP 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.hDP 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:22.128 [2024-12-15 13:03:29.770329] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.128 [2024-12-15 13:03:29.786339] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.128 [2024-12-15 13:03:29.786532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.128 malloc0 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1029473 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1029473 /var/tmp/bdevperf.sock 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1029473 ']' 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.128 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.129 13:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:22.129 [2024-12-15 13:03:29.915020] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:22.129 [2024-12-15 13:03:29.915065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029473 ] 00:23:22.129 [2024-12-15 13:03:29.989972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.129 [2024-12-15 13:03:30.013290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.388 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.388 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:22.388 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.hDP 00:23:22.648 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:22.648 [2024-12-15 13:03:30.502980] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.907 TLSTESTn1 00:23:22.907 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:22.907 Running I/O for 10 seconds... 
00:23:24.782 5444.00 IOPS, 21.27 MiB/s [2024-12-15T12:03:34.068Z] 5524.00 IOPS, 21.58 MiB/s [2024-12-15T12:03:35.006Z] 5553.67 IOPS, 21.69 MiB/s [2024-12-15T12:03:35.948Z] 5574.00 IOPS, 21.77 MiB/s [2024-12-15T12:03:36.886Z] 5595.60 IOPS, 21.86 MiB/s [2024-12-15T12:03:37.823Z] 5604.17 IOPS, 21.89 MiB/s [2024-12-15T12:03:38.760Z] 5616.86 IOPS, 21.94 MiB/s [2024-12-15T12:03:40.138Z] 5585.75 IOPS, 21.82 MiB/s [2024-12-15T12:03:40.706Z] 5590.56 IOPS, 21.84 MiB/s [2024-12-15T12:03:40.966Z] 5583.00 IOPS, 21.81 MiB/s 00:23:33.059 Latency(us) 00:23:33.059 [2024-12-15T12:03:40.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.059 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:33.059 Verification LBA range: start 0x0 length 0x2000 00:23:33.059 TLSTESTn1 : 10.01 5588.14 21.83 0.00 0.00 22872.17 5586.16 34453.21 00:23:33.059 [2024-12-15T12:03:40.966Z] =================================================================================================================== 00:23:33.059 [2024-12-15T12:03:40.966Z] Total : 5588.14 21.83 0.00 0.00 22872.17 5586.16 34453.21 00:23:33.059 { 00:23:33.059 "results": [ 00:23:33.059 { 00:23:33.059 "job": "TLSTESTn1", 00:23:33.059 "core_mask": "0x4", 00:23:33.059 "workload": "verify", 00:23:33.059 "status": "finished", 00:23:33.059 "verify_range": { 00:23:33.059 "start": 0, 00:23:33.059 "length": 8192 00:23:33.059 }, 00:23:33.059 "queue_depth": 128, 00:23:33.059 "io_size": 4096, 00:23:33.059 "runtime": 10.013173, 00:23:33.059 "iops": 5588.1387448314335, 00:23:33.059 "mibps": 21.828666971997787, 00:23:33.059 "io_failed": 0, 00:23:33.059 "io_timeout": 0, 00:23:33.059 "avg_latency_us": 22872.16857382846, 00:23:33.059 "min_latency_us": 5586.1638095238095, 00:23:33.059 "max_latency_us": 34453.21142857143 00:23:33.059 } 00:23:33.059 ], 00:23:33.059 "core_count": 1 00:23:33.059 } 00:23:33.059 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:33.059 
13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:33.059 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:33.059 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:33.059 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:33.059 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:33.060 nvmf_trace.0 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1029473 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1029473 ']' 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1029473 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1029473 00:23:33.060 13:03:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1029473' 00:23:33.060 killing process with pid 1029473 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1029473 00:23:33.060 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.060 00:23:33.060 Latency(us) 00:23:33.060 [2024-12-15T12:03:40.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.060 [2024-12-15T12:03:40.967Z] =================================================================================================================== 00:23:33.060 [2024-12-15T12:03:40.967Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.060 13:03:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1029473 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.319 rmmod nvme_tcp 00:23:33.319 rmmod nvme_fabrics 00:23:33.319 rmmod nvme_keyring 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1029447 ']' 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1029447 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1029447 ']' 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1029447 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1029447 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1029447' 00:23:33.319 killing process with pid 1029447 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1029447 00:23:33.319 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1029447 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.579 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.486 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:35.486 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.hDP 00:23:35.486 00:23:35.486 real 0m20.372s 00:23:35.486 user 0m21.162s 00:23:35.486 sys 0m9.656s 00:23:35.486 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:35.486 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:35.486 ************************************ 00:23:35.486 END TEST nvmf_fips 00:23:35.486 ************************************ 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:35.746 ************************************ 00:23:35.746 START TEST nvmf_control_msg_list 00:23:35.746 ************************************ 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:35.746 * Looking for test storage... 00:23:35.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:35.746 13:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.746 --rc genhtml_branch_coverage=1 00:23:35.746 --rc genhtml_function_coverage=1 00:23:35.746 --rc genhtml_legend=1 00:23:35.746 --rc geninfo_all_blocks=1 00:23:35.746 --rc geninfo_unexecuted_blocks=1 00:23:35.746 00:23:35.746 ' 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.746 --rc genhtml_branch_coverage=1 00:23:35.746 --rc genhtml_function_coverage=1 00:23:35.746 --rc genhtml_legend=1 00:23:35.746 --rc geninfo_all_blocks=1 00:23:35.746 --rc geninfo_unexecuted_blocks=1 00:23:35.746 00:23:35.746 ' 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.746 --rc genhtml_branch_coverage=1 00:23:35.746 --rc genhtml_function_coverage=1 00:23:35.746 --rc genhtml_legend=1 00:23:35.746 --rc geninfo_all_blocks=1 00:23:35.746 --rc geninfo_unexecuted_blocks=1 00:23:35.746 00:23:35.746 ' 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:23:35.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.746 --rc genhtml_branch_coverage=1 00:23:35.746 --rc genhtml_function_coverage=1 00:23:35.746 --rc genhtml_legend=1 00:23:35.746 --rc geninfo_all_blocks=1 00:23:35.746 --rc geninfo_unexecuted_blocks=1 00:23:35.746 00:23:35.746 ' 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:35.746 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.747 13:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:35.747 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.006 13:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.006 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.007 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.007 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.007 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.007 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.007 13:03:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:42.581 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.581 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.581 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.581 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:42.582 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:42.582 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.582 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:42.582 Found net devices under 0000:af:00.0: cvl_0_0 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.582 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:42.582 Found net devices under 0000:af:00.1: cvl_0_1 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.582 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.582 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:42.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:23:42.583 00:23:42.583 --- 10.0.0.2 ping statistics --- 00:23:42.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.583 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:23:42.583 00:23:42.583 --- 10.0.0.1 ping statistics --- 00:23:42.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.583 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1034722 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1034722 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1034722 ']' 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:42.583 [2024-12-15 13:03:49.729609] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:42.583 [2024-12-15 13:03:49.729652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.583 [2024-12-15 13:03:49.806744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.583 [2024-12-15 13:03:49.828370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.583 [2024-12-15 13:03:49.828406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.583 [2024-12-15 13:03:49.828414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.583 [2024-12-15 13:03:49.828421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.583 [2024-12-15 13:03:49.828445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:42.583 [2024-12-15 13:03:49.828968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:42.583 [2024-12-15 13:03:49.967792] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:42.583 Malloc0 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.583 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:42.583 [2024-12-15 13:03:50.016141] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1034796 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1034798 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1034800 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:42.583 13:03:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1034796 00:23:42.583 [2024-12-15 13:03:50.110891] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:42.583 [2024-12-15 13:03:50.111100] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:42.583 [2024-12-15 13:03:50.111248] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:43.522 Initializing NVMe Controllers 00:23:43.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:43.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:43.522 Initialization complete. Launching workers. 00:23:43.522 ======================================================== 00:23:43.522 Latency(us) 00:23:43.522 Device Information : IOPS MiB/s Average min max 00:23:43.522 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3931.00 15.36 253.97 214.92 788.30 00:23:43.522 ======================================================== 00:23:43.522 Total : 3931.00 15.36 253.97 214.92 788.30 00:23:43.522 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1034798 00:23:43.522 Initializing NVMe Controllers 00:23:43.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:43.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:43.522 Initialization complete. Launching workers. 
00:23:43.522 ======================================================== 00:23:43.522 Latency(us) 00:23:43.522 Device Information : IOPS MiB/s Average min max 00:23:43.522 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40909.72 40794.74 41134.84 00:23:43.522 ======================================================== 00:23:43.522 Total : 25.00 0.10 40909.72 40794.74 41134.84 00:23:43.522 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1034800 00:23:43.522 Initializing NVMe Controllers 00:23:43.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:43.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:43.522 Initialization complete. Launching workers. 00:23:43.522 ======================================================== 00:23:43.522 Latency(us) 00:23:43.522 Device Information : IOPS MiB/s Average min max 00:23:43.522 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3936.00 15.37 253.68 183.36 428.97 00:23:43.522 ======================================================== 00:23:43.522 Total : 3936.00 15.37 253.68 183.36 428.97 00:23:43.522 00:23:43.522 [2024-12-15 13:03:51.375066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b34d00 is same with the state(6) to be set 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.522 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.522 rmmod nvme_tcp 00:23:43.522 rmmod nvme_fabrics 00:23:43.522 rmmod nvme_keyring 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1034722 ']' 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1034722 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1034722 ']' 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1034722 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1034722 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1034722' 00:23:43.782 killing process with pid 1034722 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1034722 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1034722 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.782 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.321 00:23:46.321 real 0m10.276s 
00:23:46.321 user 0m6.857s 00:23:46.321 sys 0m5.437s 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:46.321 ************************************ 00:23:46.321 END TEST nvmf_control_msg_list 00:23:46.321 ************************************ 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.321 ************************************ 00:23:46.321 START TEST nvmf_wait_for_buf 00:23:46.321 ************************************ 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:46.321 * Looking for test storage... 
00:23:46.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:23:46.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.321 --rc genhtml_branch_coverage=1 00:23:46.321 --rc genhtml_function_coverage=1 00:23:46.321 --rc genhtml_legend=1 00:23:46.321 --rc geninfo_all_blocks=1 00:23:46.321 --rc geninfo_unexecuted_blocks=1 00:23:46.321 00:23:46.321 ' 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:46.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.321 --rc genhtml_branch_coverage=1 00:23:46.321 --rc genhtml_function_coverage=1 00:23:46.321 --rc genhtml_legend=1 00:23:46.321 --rc geninfo_all_blocks=1 00:23:46.321 --rc geninfo_unexecuted_blocks=1 00:23:46.321 00:23:46.321 ' 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:46.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.321 --rc genhtml_branch_coverage=1 00:23:46.321 --rc genhtml_function_coverage=1 00:23:46.321 --rc genhtml_legend=1 00:23:46.321 --rc geninfo_all_blocks=1 00:23:46.321 --rc geninfo_unexecuted_blocks=1 00:23:46.321 00:23:46.321 ' 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:46.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.321 --rc genhtml_branch_coverage=1 00:23:46.321 --rc genhtml_function_coverage=1 00:23:46.321 --rc genhtml_legend=1 00:23:46.321 --rc geninfo_all_blocks=1 00:23:46.321 --rc geninfo_unexecuted_blocks=1 00:23:46.321 00:23:46.321 ' 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.321 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.322 13:03:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.322 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.322 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:46.322 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.322 13:03:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:52.895 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.895 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:52.896 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:52.896 Found net devices under 0000:af:00.0: cvl_0_0 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.896 13:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:52.896 Found net devices under 0000:af:00.1: cvl_0_1 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.896 13:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.896 13:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:23:52.896 00:23:52.896 --- 10.0.0.2 ping statistics --- 00:23:52.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.896 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:23:52.896 00:23:52.896 --- 10.0.0.1 ping statistics --- 00:23:52.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.896 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1038443 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1038443 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1038443 ']' 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.896 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.896 [2024-12-15 13:03:59.915765] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:52.896 [2024-12-15 13:03:59.915806] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.896 [2024-12-15 13:03:59.995845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.896 [2024-12-15 13:04:00.019883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.896 [2024-12-15 13:04:00.019917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:52.897 [2024-12-15 13:04:00.019925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.897 [2024-12-15 13:04:00.019931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.897 [2024-12-15 13:04:00.019937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.897 [2024-12-15 13:04:00.020421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 
13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 Malloc0 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.897 [2024-12-15 13:04:00.218203] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.897 [2024-12-15 13:04:00.246374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:52.897 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:52.897 [2024-12-15 13:04:00.330540] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:53.834 Initializing NVMe Controllers 00:23:53.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:53.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:53.834 Initialization complete. Launching workers. 00:23:53.834 ======================================================== 00:23:53.834 Latency(us) 00:23:53.834 Device Information : IOPS MiB/s Average min max 00:23:53.834 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32224.55 7295.30 63855.70 00:23:53.834 ======================================================== 00:23:53.834 Total : 129.00 16.12 32224.55 7295.30 63855.70 00:23:53.834 00:23:53.834 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:53.834 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:53.834 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.834 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:53.834 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.093 13:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:54.093 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:54.093 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:54.093 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.094 rmmod nvme_tcp 00:23:54.094 rmmod nvme_fabrics 00:23:54.094 rmmod nvme_keyring 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1038443 ']' 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1038443 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1038443 ']' 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1038443 
00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1038443 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1038443' 00:23:54.094 killing process with pid 1038443 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1038443 00:23:54.094 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1038443 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.353 13:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.353 13:04:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.430 00:23:56.430 real 0m10.285s 00:23:56.430 user 0m3.915s 00:23:56.430 sys 0m4.816s 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:56.430 ************************************ 00:23:56.430 END TEST nvmf_wait_for_buf 00:23:56.430 ************************************ 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:56.430 ************************************ 00:23:56.430 START TEST nvmf_fuzz 00:23:56.430 ************************************ 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:23:56.430 * Looking for test storage... 00:23:56.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:23:56.430 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:23:56.710 13:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:56.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.710 --rc genhtml_branch_coverage=1 00:23:56.710 --rc genhtml_function_coverage=1 
00:23:56.710 --rc genhtml_legend=1 00:23:56.710 --rc geninfo_all_blocks=1 00:23:56.710 --rc geninfo_unexecuted_blocks=1 00:23:56.710 00:23:56.710 ' 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:56.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.710 --rc genhtml_branch_coverage=1 00:23:56.710 --rc genhtml_function_coverage=1 00:23:56.710 --rc genhtml_legend=1 00:23:56.710 --rc geninfo_all_blocks=1 00:23:56.710 --rc geninfo_unexecuted_blocks=1 00:23:56.710 00:23:56.710 ' 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:56.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.710 --rc genhtml_branch_coverage=1 00:23:56.710 --rc genhtml_function_coverage=1 00:23:56.710 --rc genhtml_legend=1 00:23:56.710 --rc geninfo_all_blocks=1 00:23:56.710 --rc geninfo_unexecuted_blocks=1 00:23:56.710 00:23:56.710 ' 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:56.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.710 --rc genhtml_branch_coverage=1 00:23:56.710 --rc genhtml_function_coverage=1 00:23:56.710 --rc genhtml_legend=1 00:23:56.710 --rc geninfo_all_blocks=1 00:23:56.710 --rc geninfo_unexecuted_blocks=1 00:23:56.710 00:23:56.710 ' 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.710 
13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.710 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:23:56.711 13:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.285 13:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:24:03.285 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:03.285 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:03.285 Found net devices under 0000:af:00.0: cvl_0_0 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:03.285 Found net devices under 0000:af:00.1: cvl_0_1 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.285 13:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.285 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.286 13:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:24:03.286 00:24:03.286 --- 10.0.0.2 ping statistics --- 00:24:03.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.286 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:24:03.286 00:24:03.286 --- 10.0.0.1 ping statistics --- 00:24:03.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.286 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1042362 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1042362 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 1042362 ']' 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.286 Malloc0 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:03.286 13:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:35.377 Fuzzing completed. 
Shutting down the fuzz application 00:24:35.377 00:24:35.377 Dumping successful admin opcodes: 00:24:35.377 9, 10, 00:24:35.377 Dumping successful io opcodes: 00:24:35.377 0, 9, 00:24:35.377 NS: 0x2000008eff00 I/O qp, Total commands completed: 897062, total successful commands: 5226, random_seed: 2977042368 00:24:35.377 NS: 0x2000008eff00 admin qp, Total commands completed: 85792, total successful commands: 20, random_seed: 55444800 00:24:35.377 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:35.377 Fuzzing completed. Shutting down the fuzz application 00:24:35.377 00:24:35.377 Dumping successful admin opcodes: 00:24:35.377 00:24:35.377 Dumping successful io opcodes: 00:24:35.377 00:24:35.377 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4196657124 00:24:35.377 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 4196718426 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:35.377 13:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.377 rmmod nvme_tcp 00:24:35.377 rmmod nvme_fabrics 00:24:35.377 rmmod nvme_keyring 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1042362 ']' 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1042362 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1042362 ']' 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1042362 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042362 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042362' 00:24:35.377 killing process with pid 1042362 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1042362 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1042362 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.377 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:37.285 00:24:37.285 real 0m40.623s 00:24:37.285 user 0m52.432s 00:24:37.285 sys 0m17.445s 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:37.285 ************************************ 00:24:37.285 END TEST nvmf_fuzz 00:24:37.285 ************************************ 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:37.285 ************************************ 00:24:37.285 START TEST nvmf_multiconnection 00:24:37.285 ************************************ 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:37.285 * Looking for test storage... 
00:24:37.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:24:37.285 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:37.285 13:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:37.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.285 --rc genhtml_branch_coverage=1 00:24:37.285 --rc genhtml_function_coverage=1 00:24:37.285 --rc genhtml_legend=1 00:24:37.285 --rc geninfo_all_blocks=1 00:24:37.285 --rc geninfo_unexecuted_blocks=1 00:24:37.285 00:24:37.285 ' 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:37.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.285 --rc genhtml_branch_coverage=1 00:24:37.285 --rc genhtml_function_coverage=1 00:24:37.285 --rc genhtml_legend=1 00:24:37.285 --rc geninfo_all_blocks=1 00:24:37.285 --rc geninfo_unexecuted_blocks=1 00:24:37.285 00:24:37.285 ' 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:37.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.285 --rc genhtml_branch_coverage=1 00:24:37.285 --rc genhtml_function_coverage=1 00:24:37.285 --rc genhtml_legend=1 00:24:37.285 --rc geninfo_all_blocks=1 00:24:37.285 --rc geninfo_unexecuted_blocks=1 00:24:37.285 00:24:37.285 ' 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:37.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.285 --rc genhtml_branch_coverage=1 00:24:37.285 --rc genhtml_function_coverage=1 00:24:37.285 --rc genhtml_legend=1 00:24:37.285 --rc geninfo_all_blocks=1 00:24:37.285 --rc geninfo_unexecuted_blocks=1 00:24:37.285 00:24:37.285 ' 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:37.285 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.286 13:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.286 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.854 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.855 13:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.855 13:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:43.855 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:43.855 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
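The device-scan trace above compares PCI device IDs with bash `[[ ]]` pattern matching; xtrace prints the right-hand pattern with every character backslash-escaped, so `\0\x\1\0\1\7` is simply the literal `0x1017`. A small illustration of the same dispatch (device names here are assumptions for the sketch, not taken from the log):

```shell
# Match a PCI device ID the way the scan loop does. In xtrace output the
# unquoted pattern appears escaped (e.g. \0\x\1\5\9\b), but it is a plain
# glob/literal match inside [[ ]].
dev=0x159b                      # ID reported for 0000:af:00.0 in the log
if [[ $dev == 0x1017 ]]; then
  echo "mellanox-class device"  # hypothetical label
elif [[ $dev == 0x159b ]]; then
  echo "ice-driven device"      # hypothetical label
else
  echo "unknown device"
fi
```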
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:43.855 Found net devices under 0000:af:00.0: cvl_0_0 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:43.855 Found net devices under 0000:af:00.1: cvl_0_1 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.855 13:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:24:43.855 00:24:43.855 --- 10.0.0.2 ping statistics --- 00:24:43.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.855 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:43.855 00:24:43.855 --- 10.0.0.1 ping statistics --- 00:24:43.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.855 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.855 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1050925 00:24:43.856 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1050925 00:24:43.856 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:43.856 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1050925 ']' 00:24:43.856 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.856 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.856 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:43.856 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.856 13:04:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 [2024-12-15 13:04:50.976775] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:43.856 [2024-12-15 13:04:50.976835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.856 [2024-12-15 13:04:51.057991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.856 [2024-12-15 13:04:51.082113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.856 [2024-12-15 13:04:51.082154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.856 [2024-12-15 13:04:51.082161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.856 [2024-12-15 13:04:51.082166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.856 [2024-12-15 13:04:51.082171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:43.856 [2024-12-15 13:04:51.083659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.856 [2024-12-15 13:04:51.083766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.856 [2024-12-15 13:04:51.083876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.856 [2024-12-15 13:04:51.083876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 [2024-12-15 13:04:51.224214] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:43.856 13:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 Malloc1 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 [2024-12-15 13:04:51.295035] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 Malloc2 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 Malloc3 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 Malloc4 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 
13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.856 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 Malloc5 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 Malloc6 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 Malloc7 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 Malloc8 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 Malloc9 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 Malloc10 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.857 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.858 Malloc11 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:43.858 
13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.858 13:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
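The trace above repeats multiconnection.sh@21-@25 once per subsystem (Malloc5 through Malloc11 in this chunk): each iteration creates a 64 MiB malloc bdev with 512-byte blocks, creates subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. A minimal dry-run sketch of that loop follows; the `rpc.py` invocation and the `echo` indirection are illustrative assumptions (the test actually goes through its own `rpc_cmd` wrapper), so this prints the RPC commands rather than requiring a live target:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-subsystem setup loop seen in the trace above.
# ASSUMPTION: 'echo rpc.py' stands in for the test's rpc_cmd wrapper; drop the
# 'echo' and point at scripts/rpc.py to run against a real SPDK target.
setup_subsystems() {
  local rpc="echo rpc.py"     # dry run: print each RPC instead of issuing it
  local nvmf_subsys=11 ip=10.0.0.2 port=4420
  for i in $(seq 1 "$nvmf_subsys"); do
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"                    # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a "$ip" -s "$port"
  done
}
setup_subsystems
```

Each of the 11 iterations emits the same four RPCs the log records, which is why the xtrace lines above differ only in the Malloc/cnode/SPDK index.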
00:24:45.236 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:45.236 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:45.236 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:45.236 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:45.236 13:04:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:47.141 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:47.141 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:47.141 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:24:47.141 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:47.141 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:47.141 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:47.141 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.141 13:04:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:48.524 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:48.524 13:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:48.524 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:48.524 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:48.524 13:04:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:50.430 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:50.430 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:50.430 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:24:50.430 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:50.430 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:50.430 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:50.430 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.431 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:51.368 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:51.368 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:51.368 13:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:51.368 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:51.368 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:53.272 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:53.272 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:53.272 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:24:53.531 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:53.531 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:53.531 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:53.531 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.531 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:54.909 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:54.909 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:54.909 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:54.909 
13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:54.909 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:56.930 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:56.930 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:56.930 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:24:56.930 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:56.930 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:56.930 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:56.930 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.930 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:57.865 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:57.865 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:57.865 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.865 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:57.865 13:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:00.399 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:00.399 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:00.399 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:00.399 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:00.399 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.399 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:00.399 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.399 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:01.408 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:01.408 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:01.408 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.408 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:01.408 13:05:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:03.311 13:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:03.311 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:03.311 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:03.311 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:03.311 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.311 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:03.311 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.311 13:05:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:04.686 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:04.686 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:04.686 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.686 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:04.686 13:05:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:06.590 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:06.590 13:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:06.590 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:06.590 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:06.590 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.590 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:06.590 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.590 13:05:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:07.966 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:07.966 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:07.966 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.966 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:07.966 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:10.500 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:10.500 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:10.500 13:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:10.500 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:10.500 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.500 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:10.500 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.500 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:11.436 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:11.436 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:11.436 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:11.436 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:11.436 13:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:13.339 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:13.339 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:13.339 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:13.597 13:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:13.597 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:13.597 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:13.597 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.598 13:05:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:14.974 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:14.974 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:14.974 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:14.974 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:14.974 13:05:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:16.880 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:16.880 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:16.880 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:16.880 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:16.880 13:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:16.880 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:16.880 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.880 13:05:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:18.785 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:18.785 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:18.785 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:18.785 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:18.785 13:05:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:20.691 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:20.691 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:20.691 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:20.691 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:20.691 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:20.691 
13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:20.691 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:20.691 [global] 00:25:20.691 thread=1 00:25:20.691 invalidate=1 00:25:20.691 rw=read 00:25:20.691 time_based=1 00:25:20.691 runtime=10 00:25:20.691 ioengine=libaio 00:25:20.691 direct=1 00:25:20.691 bs=262144 00:25:20.691 iodepth=64 00:25:20.691 norandommap=1 00:25:20.691 numjobs=1 00:25:20.691 00:25:20.691 [job0] 00:25:20.691 filename=/dev/nvme0n1 00:25:20.691 [job1] 00:25:20.691 filename=/dev/nvme10n1 00:25:20.691 [job2] 00:25:20.691 filename=/dev/nvme1n1 00:25:20.691 [job3] 00:25:20.691 filename=/dev/nvme2n1 00:25:20.691 [job4] 00:25:20.691 filename=/dev/nvme3n1 00:25:20.691 [job5] 00:25:20.691 filename=/dev/nvme4n1 00:25:20.691 [job6] 00:25:20.691 filename=/dev/nvme5n1 00:25:20.691 [job7] 00:25:20.691 filename=/dev/nvme6n1 00:25:20.691 [job8] 00:25:20.691 filename=/dev/nvme7n1 00:25:20.691 [job9] 00:25:20.691 filename=/dev/nvme8n1 00:25:20.691 [job10] 00:25:20.691 filename=/dev/nvme9n1 00:25:20.691 Could not set queue depth (nvme0n1) 00:25:20.691 Could not set queue depth (nvme10n1) 00:25:20.691 Could not set queue depth (nvme1n1) 00:25:20.691 Could not set queue depth (nvme2n1) 00:25:20.691 Could not set queue depth (nvme3n1) 00:25:20.691 Could not set queue depth (nvme4n1) 00:25:20.691 Could not set queue depth (nvme5n1) 00:25:20.691 Could not set queue depth (nvme6n1) 00:25:20.691 Could not set queue depth (nvme7n1) 00:25:20.691 Could not set queue depth (nvme8n1) 00:25:20.691 Could not set queue depth (nvme9n1) 00:25:20.950 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:25:20.950 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:20.950 fio-3.35 00:25:20.950 Starting 11 threads 00:25:33.162 00:25:33.162 job0: (groupid=0, jobs=1): err= 0: pid=1057232: Sun Dec 15 13:05:39 2024 00:25:33.162 read: IOPS=234, BW=58.7MiB/s (61.6MB/s)(591MiB/10061msec) 00:25:33.162 slat (usec): min=15, max=373240, avg=3076.22, stdev=17184.05 00:25:33.162 clat (usec): min=1071, max=1071.3k, avg=269044.56, stdev=252693.49 00:25:33.162 lat (usec): min=1106, max=1256.7k, avg=272120.78, stdev=255498.84 00:25:33.162 clat percentiles (msec): 00:25:33.162 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 25], 20.00th=[ 77], 00:25:33.162 | 30.00th=[ 107], 40.00th=[ 138], 50.00th=[ 161], 60.00th=[ 213], 00:25:33.162 | 70.00th=[ 317], 80.00th=[ 502], 90.00th=[ 709], 95.00th=[ 768], 00:25:33.162 | 99.00th=[ 1011], 99.50th=[ 1028], 99.90th=[ 1070], 99.95th=[ 1070], 00:25:33.162 | 99.99th=[ 1070] 00:25:33.162 bw ( 
KiB/s): min= 8192, max=200192, per=7.12%, avg=58872.05, stdev=52137.17, samples=20 00:25:33.162 iops : min= 32, max= 782, avg=229.95, stdev=203.65, samples=20 00:25:33.162 lat (msec) : 2=0.13%, 4=3.09%, 10=3.26%, 20=1.52%, 50=9.22% 00:25:33.162 lat (msec) : 100=10.49%, 250=34.77%, 500=17.47%, 750=14.17%, 1000=4.78% 00:25:33.162 lat (msec) : 2000=1.10% 00:25:33.162 cpu : usr=0.10%, sys=0.81%, ctx=546, majf=0, minf=3722 00:25:33.162 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:25:33.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.162 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.162 job1: (groupid=0, jobs=1): err= 0: pid=1057233: Sun Dec 15 13:05:39 2024 00:25:33.162 read: IOPS=470, BW=118MiB/s (123MB/s)(1191MiB/10129msec) 00:25:33.162 slat (usec): min=15, max=478195, avg=1497.66, stdev=10374.50 00:25:33.162 clat (usec): min=1245, max=722923, avg=134496.13, stdev=123588.53 00:25:33.162 lat (usec): min=1663, max=957654, avg=135993.80, stdev=125349.84 00:25:33.162 clat percentiles (msec): 00:25:33.162 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 35], 20.00th=[ 43], 00:25:33.162 | 30.00th=[ 60], 40.00th=[ 79], 50.00th=[ 96], 60.00th=[ 115], 00:25:33.162 | 70.00th=[ 140], 80.00th=[ 192], 90.00th=[ 317], 95.00th=[ 414], 00:25:33.162 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 651], 99.95th=[ 709], 00:25:33.162 | 99.99th=[ 726] 00:25:33.162 bw ( KiB/s): min=28672, max=359424, per=14.54%, avg=120268.80, stdev=91597.76, samples=20 00:25:33.162 iops : min= 112, max= 1404, avg=469.80, stdev=357.80, samples=20 00:25:33.162 lat (msec) : 2=0.04%, 4=0.06%, 10=0.36%, 20=2.52%, 50=21.29% 00:25:33.162 lat (msec) : 100=28.03%, 250=34.48%, 500=10.44%, 750=2.77% 00:25:33.162 cpu : usr=0.23%, sys=1.69%, ctx=926, majf=0, minf=4097 00:25:33.162 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:33.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.162 issued rwts: total=4762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.162 job2: (groupid=0, jobs=1): err= 0: pid=1057234: Sun Dec 15 13:05:39 2024 00:25:33.162 read: IOPS=305, BW=76.3MiB/s (80.0MB/s)(768MiB/10058msec) 00:25:33.162 slat (usec): min=7, max=1065.5k, avg=1789.82, stdev=22663.42 00:25:33.162 clat (usec): min=1819, max=1238.5k, avg=207615.93, stdev=287009.62 00:25:33.162 lat (usec): min=1857, max=1684.5k, avg=209405.75, stdev=289215.81 00:25:33.162 clat percentiles (msec): 00:25:33.162 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 25], 00:25:33.162 | 30.00th=[ 34], 40.00th=[ 61], 50.00th=[ 80], 60.00th=[ 124], 00:25:33.162 | 70.00th=[ 184], 80.00th=[ 321], 90.00th=[ 684], 95.00th=[ 927], 00:25:33.162 | 99.00th=[ 1183], 99.50th=[ 1234], 99.90th=[ 1234], 99.95th=[ 1234], 00:25:33.162 | 99.99th=[ 1234] 00:25:33.162 bw ( KiB/s): min= 6656, max=269312, per=9.80%, avg=81052.37, stdev=79386.05, samples=19 00:25:33.162 iops : min= 26, max= 1052, avg=316.58, stdev=310.11, samples=19 00:25:33.162 lat (msec) : 2=0.23%, 4=3.55%, 10=6.48%, 20=5.86%, 50=18.10% 00:25:33.162 lat (msec) : 100=19.31%, 250=22.57%, 500=9.70%, 750=6.02%, 1000=4.17% 00:25:33.162 lat (msec) : 2000=4.01% 00:25:33.162 cpu : usr=0.13%, sys=1.24%, ctx=997, majf=0, minf=4097 00:25:33.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:25:33.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.162 issued rwts: total=3071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.162 latency : target=0, window=0, percentile=100.00%, depth=64 
00:25:33.162 job3: (groupid=0, jobs=1): err= 0: pid=1057235: Sun Dec 15 13:05:39 2024 00:25:33.162 read: IOPS=370, BW=92.6MiB/s (97.1MB/s)(934MiB/10085msec) 00:25:33.162 slat (usec): min=13, max=246340, avg=1616.62, stdev=9983.92 00:25:33.162 clat (usec): min=1131, max=904532, avg=171080.21, stdev=159952.83 00:25:33.162 lat (usec): min=1162, max=985281, avg=172696.82, stdev=161870.57 00:25:33.162 clat percentiles (msec): 00:25:33.162 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 27], 20.00th=[ 57], 00:25:33.162 | 30.00th=[ 101], 40.00th=[ 118], 50.00th=[ 131], 60.00th=[ 153], 00:25:33.162 | 70.00th=[ 169], 80.00th=[ 207], 90.00th=[ 388], 95.00th=[ 584], 00:25:33.162 | 99.00th=[ 776], 99.50th=[ 810], 99.90th=[ 877], 99.95th=[ 902], 00:25:33.162 | 99.99th=[ 902] 00:25:33.162 bw ( KiB/s): min=15872, max=236032, per=11.36%, avg=93957.60, stdev=59476.37, samples=20 00:25:33.162 iops : min= 62, max= 922, avg=367.00, stdev=232.34, samples=20 00:25:33.162 lat (msec) : 2=0.88%, 4=0.83%, 10=4.42%, 20=1.37%, 50=10.90% 00:25:33.162 lat (msec) : 100=11.73%, 250=53.29%, 500=10.82%, 750=4.74%, 1000=1.02% 00:25:33.162 cpu : usr=0.12%, sys=1.40%, ctx=1036, majf=0, minf=4097 00:25:33.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:33.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.162 issued rwts: total=3734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.162 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.162 job4: (groupid=0, jobs=1): err= 0: pid=1057236: Sun Dec 15 13:05:39 2024 00:25:33.162 read: IOPS=160, BW=40.2MiB/s (42.1MB/s)(407MiB/10131msec) 00:25:33.162 slat (usec): min=15, max=528124, avg=3905.33, stdev=28923.39 00:25:33.162 clat (usec): min=1390, max=1283.1k, avg=393930.82, stdev=330247.60 00:25:33.162 lat (usec): min=1416, max=1283.2k, avg=397836.16, stdev=334201.46 00:25:33.162 clat percentiles (msec): 
00:25:33.162 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 27], 20.00th=[ 122], 00:25:33.162 | 30.00th=[ 184], 40.00th=[ 224], 50.00th=[ 279], 60.00th=[ 368], 00:25:33.162 | 70.00th=[ 451], 80.00th=[ 760], 90.00th=[ 936], 95.00th=[ 1062], 00:25:33.162 | 99.00th=[ 1234], 99.50th=[ 1284], 99.90th=[ 1284], 99.95th=[ 1284], 00:25:33.162 | 99.99th=[ 1284] 00:25:33.162 bw ( KiB/s): min= 3072, max=124928, per=4.84%, avg=40032.05, stdev=31419.09, samples=20 00:25:33.162 iops : min= 12, max= 488, avg=156.35, stdev=122.71, samples=20 00:25:33.163 lat (msec) : 2=0.18%, 4=0.98%, 10=3.44%, 20=3.75%, 50=4.12% 00:25:33.163 lat (msec) : 100=5.84%, 250=26.78%, 500=27.03%, 750=6.76%, 1000=14.68% 00:25:33.163 lat (msec) : 2000=6.45% 00:25:33.163 cpu : usr=0.08%, sys=0.63%, ctx=415, majf=0, minf=4097 00:25:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:25:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.163 issued rwts: total=1628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.163 job5: (groupid=0, jobs=1): err= 0: pid=1057237: Sun Dec 15 13:05:39 2024 00:25:33.163 read: IOPS=475, BW=119MiB/s (125MB/s)(1200MiB/10091msec) 00:25:33.163 slat (usec): min=15, max=416147, avg=1829.18, stdev=12303.10 00:25:33.163 clat (usec): min=1489, max=996637, avg=132605.04, stdev=170865.82 00:25:33.163 lat (usec): min=1574, max=996680, avg=134434.23, stdev=173206.50 00:25:33.163 clat percentiles (msec): 00:25:33.163 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 29], 00:25:33.163 | 30.00th=[ 43], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 96], 00:25:33.163 | 70.00th=[ 109], 80.00th=[ 140], 90.00th=[ 359], 95.00th=[ 592], 00:25:33.163 | 99.00th=[ 785], 99.50th=[ 927], 99.90th=[ 995], 99.95th=[ 995], 00:25:33.163 | 99.99th=[ 995] 00:25:33.163 bw ( KiB/s): min=17920, max=456192, 
per=14.65%, avg=121191.65, stdev=116952.18, samples=20 00:25:33.163 iops : min= 70, max= 1782, avg=473.40, stdev=456.84, samples=20 00:25:33.163 lat (msec) : 2=0.08%, 4=0.15%, 10=0.73%, 20=2.48%, 50=30.11% 00:25:33.163 lat (msec) : 100=27.80%, 250=23.69%, 500=8.69%, 750=4.86%, 1000=1.42% 00:25:33.163 cpu : usr=0.18%, sys=1.43%, ctx=919, majf=0, minf=4097 00:25:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.163 issued rwts: total=4799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.163 job6: (groupid=0, jobs=1): err= 0: pid=1057238: Sun Dec 15 13:05:39 2024 00:25:33.163 read: IOPS=197, BW=49.3MiB/s (51.7MB/s)(500MiB/10136msec) 00:25:33.163 slat (usec): min=15, max=298347, avg=2670.14, stdev=17452.28 00:25:33.163 clat (msec): min=16, max=879, avg=321.48, stdev=262.91 00:25:33.163 lat (msec): min=16, max=1030, avg=324.15, stdev=264.99 00:25:33.163 clat percentiles (msec): 00:25:33.163 | 1.00th=[ 23], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 34], 00:25:33.163 | 30.00th=[ 64], 40.00th=[ 167], 50.00th=[ 275], 60.00th=[ 430], 00:25:33.163 | 70.00th=[ 518], 80.00th=[ 617], 90.00th=[ 693], 95.00th=[ 743], 00:25:33.163 | 99.00th=[ 793], 99.50th=[ 844], 99.90th=[ 860], 99.95th=[ 877], 00:25:33.163 | 99.99th=[ 877] 00:25:33.163 bw ( KiB/s): min=12288, max=301056, per=5.99%, avg=49544.00, stdev=62286.92, samples=20 00:25:33.163 iops : min= 48, max= 1176, avg=193.50, stdev=243.29, samples=20 00:25:33.163 lat (msec) : 20=0.60%, 50=27.86%, 100=5.40%, 250=15.21%, 500=18.36% 00:25:33.163 lat (msec) : 750=28.41%, 1000=4.15% 00:25:33.163 cpu : usr=0.06%, sys=0.82%, ctx=278, majf=0, minf=4097 00:25:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:25:33.163 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.163 issued rwts: total=1999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.163 job7: (groupid=0, jobs=1): err= 0: pid=1057239: Sun Dec 15 13:05:39 2024 00:25:33.163 read: IOPS=141, BW=35.3MiB/s (37.1MB/s)(358MiB/10131msec) 00:25:33.163 slat (usec): min=16, max=244814, avg=5977.41, stdev=23486.64 00:25:33.163 clat (msec): min=31, max=1017, avg=446.30, stdev=249.43 00:25:33.163 lat (msec): min=31, max=1017, avg=452.27, stdev=252.41 00:25:33.163 clat percentiles (msec): 00:25:33.163 | 1.00th=[ 45], 5.00th=[ 79], 10.00th=[ 123], 20.00th=[ 211], 00:25:33.163 | 30.00th=[ 264], 40.00th=[ 338], 50.00th=[ 409], 60.00th=[ 542], 00:25:33.163 | 70.00th=[ 617], 80.00th=[ 676], 90.00th=[ 776], 95.00th=[ 877], 00:25:33.163 | 99.00th=[ 969], 99.50th=[ 969], 99.90th=[ 1020], 99.95th=[ 1020], 00:25:33.163 | 99.99th=[ 1020] 00:25:33.163 bw ( KiB/s): min=12800, max=100352, per=4.46%, avg=36897.89, stdev=22346.38, samples=19 00:25:33.163 iops : min= 50, max= 392, avg=144.11, stdev=87.25, samples=19 00:25:33.163 lat (msec) : 50=2.09%, 100=4.61%, 250=21.23%, 500=27.86%, 750=32.05% 00:25:33.163 lat (msec) : 1000=11.73%, 2000=0.42% 00:25:33.163 cpu : usr=0.06%, sys=0.62%, ctx=205, majf=0, minf=4097 00:25:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:25:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.163 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.163 job8: (groupid=0, jobs=1): err= 0: pid=1057240: Sun Dec 15 13:05:39 2024 00:25:33.163 read: IOPS=282, BW=70.7MiB/s (74.2MB/s)(717MiB/10132msec) 00:25:33.163 slat (usec): 
min=12, max=322005, avg=2576.37, stdev=17241.97 00:25:33.163 clat (usec): min=1425, max=1215.2k, avg=223380.18, stdev=227348.40 00:25:33.163 lat (usec): min=1482, max=1215.3k, avg=225956.55, stdev=230359.06 00:25:33.163 clat percentiles (msec): 00:25:33.163 | 1.00th=[ 5], 5.00th=[ 28], 10.00th=[ 45], 20.00th=[ 65], 00:25:33.163 | 30.00th=[ 82], 40.00th=[ 108], 50.00th=[ 136], 60.00th=[ 163], 00:25:33.163 | 70.00th=[ 213], 80.00th=[ 376], 90.00th=[ 634], 95.00th=[ 751], 00:25:33.163 | 99.00th=[ 902], 99.50th=[ 1020], 99.90th=[ 1150], 99.95th=[ 1217], 00:25:33.163 | 99.99th=[ 1217] 00:25:33.163 bw ( KiB/s): min= 7680, max=236032, per=8.68%, avg=71756.80, stdev=67702.97, samples=20 00:25:33.163 iops : min= 30, max= 922, avg=280.30, stdev=264.46, samples=20 00:25:33.163 lat (msec) : 2=0.10%, 4=0.70%, 10=1.33%, 20=1.46%, 50=9.73% 00:25:33.163 lat (msec) : 100=24.10%, 250=35.65%, 500=13.50%, 750=9.03%, 1000=3.80% 00:25:33.163 lat (msec) : 2000=0.59% 00:25:33.163 cpu : usr=0.10%, sys=1.16%, ctx=509, majf=0, minf=4097 00:25:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:25:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.163 issued rwts: total=2867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.163 job9: (groupid=0, jobs=1): err= 0: pid=1057241: Sun Dec 15 13:05:39 2024 00:25:33.163 read: IOPS=352, BW=88.2MiB/s (92.5MB/s)(894MiB/10131msec) 00:25:33.163 slat (usec): min=11, max=267281, avg=1728.01, stdev=11745.37 00:25:33.163 clat (usec): min=1234, max=1119.1k, avg=179429.72, stdev=189219.97 00:25:33.163 lat (usec): min=1265, max=1148.8k, avg=181157.73, stdev=191402.11 00:25:33.163 clat percentiles (msec): 00:25:33.163 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 30], 00:25:33.163 | 30.00th=[ 63], 40.00th=[ 94], 50.00th=[ 136], 60.00th=[ 
176], 00:25:33.163 | 70.00th=[ 207], 80.00th=[ 251], 90.00th=[ 401], 95.00th=[ 609], 00:25:33.163 | 99.00th=[ 869], 99.50th=[ 969], 99.90th=[ 1070], 99.95th=[ 1116], 00:25:33.163 | 99.99th=[ 1116] 00:25:33.163 bw ( KiB/s): min=17408, max=374272, per=10.87%, avg=89881.60, stdev=84699.54, samples=20 00:25:33.163 iops : min= 68, max= 1462, avg=351.10, stdev=330.86, samples=20 00:25:33.163 lat (msec) : 2=0.17%, 4=0.48%, 10=5.62%, 20=8.59%, 50=12.34% 00:25:33.163 lat (msec) : 100=14.77%, 250=37.85%, 500=13.82%, 750=2.66%, 1000=3.33% 00:25:33.163 lat (msec) : 2000=0.39% 00:25:33.163 cpu : usr=0.11%, sys=1.25%, ctx=1026, majf=0, minf=4097 00:25:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:25:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.163 issued rwts: total=3575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.163 job10: (groupid=0, jobs=1): err= 0: pid=1057242: Sun Dec 15 13:05:39 2024 00:25:33.163 read: IOPS=249, BW=62.3MiB/s (65.4MB/s)(629MiB/10092msec) 00:25:33.163 slat (usec): min=15, max=507066, avg=1787.33, stdev=15114.98 00:25:33.163 clat (usec): min=679, max=1060.1k, avg=254609.94, stdev=267514.72 00:25:33.163 lat (usec): min=704, max=1197.7k, avg=256397.27, stdev=269103.09 00:25:33.163 clat percentiles (msec): 00:25:33.163 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 39], 00:25:33.163 | 30.00th=[ 61], 40.00th=[ 85], 50.00th=[ 120], 60.00th=[ 213], 00:25:33.163 | 70.00th=[ 368], 80.00th=[ 518], 90.00th=[ 701], 95.00th=[ 760], 00:25:33.163 | 99.00th=[ 944], 99.50th=[ 995], 99.90th=[ 1062], 99.95th=[ 1062], 00:25:33.163 | 99.99th=[ 1062] 00:25:33.163 bw ( KiB/s): min= 5632, max=268288, per=7.59%, avg=62796.80, stdev=66374.69, samples=20 00:25:33.163 iops : min= 22, max= 1048, avg=245.30, stdev=259.28, samples=20 00:25:33.163 
lat (usec) : 750=0.12%, 1000=0.08% 00:25:33.163 lat (msec) : 2=0.16%, 4=3.93%, 10=8.55%, 20=0.28%, 50=13.28% 00:25:33.163 lat (msec) : 100=19.52%, 250=16.81%, 500=16.81%, 750=14.19%, 1000=5.92% 00:25:33.163 lat (msec) : 2000=0.36% 00:25:33.163 cpu : usr=0.09%, sys=0.94%, ctx=667, majf=0, minf=4097 00:25:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:25:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.163 issued rwts: total=2516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.163 00:25:33.163 Run status group 0 (all jobs): 00:25:33.163 READ: bw=808MiB/s (847MB/s), 35.3MiB/s-119MiB/s (37.1MB/s-125MB/s), io=8187MiB (8584MB), run=10058-10136msec 00:25:33.163 00:25:33.163 Disk stats (read/write): 00:25:33.163 nvme0n1: ios=4586/0, merge=0/0, ticks=1242953/0, in_queue=1242953, util=97.44% 00:25:33.163 nvme10n1: ios=9377/0, merge=0/0, ticks=1221333/0, in_queue=1221333, util=97.60% 00:25:33.163 nvme1n1: ios=5987/0, merge=0/0, ticks=1248846/0, in_queue=1248846, util=97.88% 00:25:33.163 nvme2n1: ios=7299/0, merge=0/0, ticks=1229088/0, in_queue=1229088, util=97.98% 00:25:33.163 nvme3n1: ios=3119/0, merge=0/0, ticks=1222184/0, in_queue=1222184, util=98.09% 00:25:33.163 nvme4n1: ios=9449/0, merge=0/0, ticks=1216653/0, in_queue=1216653, util=98.39% 00:25:33.163 nvme5n1: ios=3842/0, merge=0/0, ticks=1222279/0, in_queue=1222279, util=98.50% 00:25:33.163 nvme6n1: ios=2736/0, merge=0/0, ticks=1221969/0, in_queue=1221969, util=98.61% 00:25:33.163 nvme7n1: ios=5602/0, merge=0/0, ticks=1230254/0, in_queue=1230254, util=98.97% 00:25:33.164 nvme8n1: ios=7023/0, merge=0/0, ticks=1223857/0, in_queue=1223857, util=99.13% 00:25:33.164 nvme9n1: ios=4889/0, merge=0/0, ticks=1224167/0, in_queue=1224167, util=99.24% 00:25:33.164 13:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:33.164 [global] 00:25:33.164 thread=1 00:25:33.164 invalidate=1 00:25:33.164 rw=randwrite 00:25:33.164 time_based=1 00:25:33.164 runtime=10 00:25:33.164 ioengine=libaio 00:25:33.164 direct=1 00:25:33.164 bs=262144 00:25:33.164 iodepth=64 00:25:33.164 norandommap=1 00:25:33.164 numjobs=1 00:25:33.164 00:25:33.164 [job0] 00:25:33.164 filename=/dev/nvme0n1 00:25:33.164 [job1] 00:25:33.164 filename=/dev/nvme10n1 00:25:33.164 [job2] 00:25:33.164 filename=/dev/nvme1n1 00:25:33.164 [job3] 00:25:33.164 filename=/dev/nvme2n1 00:25:33.164 [job4] 00:25:33.164 filename=/dev/nvme3n1 00:25:33.164 [job5] 00:25:33.164 filename=/dev/nvme4n1 00:25:33.164 [job6] 00:25:33.164 filename=/dev/nvme5n1 00:25:33.164 [job7] 00:25:33.164 filename=/dev/nvme6n1 00:25:33.164 [job8] 00:25:33.164 filename=/dev/nvme7n1 00:25:33.164 [job9] 00:25:33.164 filename=/dev/nvme8n1 00:25:33.164 [job10] 00:25:33.164 filename=/dev/nvme9n1 00:25:33.164 Could not set queue depth (nvme0n1) 00:25:33.164 Could not set queue depth (nvme10n1) 00:25:33.164 Could not set queue depth (nvme1n1) 00:25:33.164 Could not set queue depth (nvme2n1) 00:25:33.164 Could not set queue depth (nvme3n1) 00:25:33.164 Could not set queue depth (nvme4n1) 00:25:33.164 Could not set queue depth (nvme5n1) 00:25:33.164 Could not set queue depth (nvme6n1) 00:25:33.164 Could not set queue depth (nvme7n1) 00:25:33.164 Could not set queue depth (nvme8n1) 00:25:33.164 Could not set queue depth (nvme9n1) 00:25:33.164 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:33.164 fio-3.35 00:25:33.164 Starting 11 threads 00:25:43.145 00:25:43.145 job0: (groupid=0, jobs=1): err= 0: pid=1058319: Sun Dec 15 13:05:50 2024 00:25:43.145 write: IOPS=329, BW=82.3MiB/s (86.3MB/s)(832MiB/10108msec); 0 zone resets 00:25:43.145 slat (usec): min=19, max=61441, avg=2954.05, stdev=6099.22 00:25:43.145 clat (msec): min=6, max=398, avg=191.49, stdev=90.80 00:25:43.145 lat (msec): min=6, max=398, avg=194.45, stdev=91.97 00:25:43.145 clat percentiles (msec): 00:25:43.145 | 1.00th=[ 39], 5.00th=[ 85], 10.00th=[ 108], 20.00th=[ 117], 00:25:43.145 | 30.00th=[ 120], 40.00th=[ 126], 50.00th=[ 157], 60.00th=[ 211], 00:25:43.145 | 70.00th=[ 255], 80.00th=[ 296], 90.00th=[ 326], 95.00th=[ 347], 00:25:43.145 | 99.00th=[ 376], 99.50th=[ 384], 99.90th=[ 397], 99.95th=[ 401], 00:25:43.145 | 99.99th=[ 401] 00:25:43.145 bw ( KiB/s): min=43008, max=160256, per=6.92%, avg=83532.80, stdev=37501.95, samples=20 
00:25:43.145 iops : min= 168, max= 626, avg=326.30, stdev=146.49, samples=20 00:25:43.145 lat (msec) : 10=0.15%, 20=0.06%, 50=1.02%, 100=7.64%, 250=60.16% 00:25:43.145 lat (msec) : 500=30.97% 00:25:43.145 cpu : usr=0.78%, sys=1.15%, ctx=887, majf=0, minf=1 00:25:43.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.145 issued rwts: total=0,3326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.145 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.145 job1: (groupid=0, jobs=1): err= 0: pid=1058348: Sun Dec 15 13:05:50 2024 00:25:43.145 write: IOPS=576, BW=144MiB/s (151MB/s)(1461MiB/10134msec); 0 zone resets 00:25:43.145 slat (usec): min=26, max=60686, avg=1326.52, stdev=4016.03 00:25:43.145 clat (usec): min=671, max=388670, avg=109550.49, stdev=95812.33 00:25:43.145 lat (usec): min=709, max=395503, avg=110877.01, stdev=97063.24 00:25:43.145 clat percentiles (msec): 00:25:43.145 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 39], 20.00th=[ 44], 00:25:43.145 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 94], 00:25:43.145 | 70.00th=[ 131], 80.00th=[ 190], 90.00th=[ 284], 95.00th=[ 317], 00:25:43.145 | 99.00th=[ 368], 99.50th=[ 376], 99.90th=[ 384], 99.95th=[ 388], 00:25:43.145 | 99.99th=[ 388] 00:25:43.145 bw ( KiB/s): min=49152, max=360448, per=12.25%, avg=147993.60, stdev=110125.61, samples=20 00:25:43.145 iops : min= 192, max= 1408, avg=578.10, stdev=430.18, samples=20 00:25:43.145 lat (usec) : 750=0.07%, 1000=0.05% 00:25:43.145 lat (msec) : 2=0.41%, 4=0.29%, 10=1.68%, 20=2.31%, 50=46.32% 00:25:43.145 lat (msec) : 100=10.69%, 250=25.68%, 500=12.49% 00:25:43.145 cpu : usr=1.23%, sys=1.84%, ctx=2706, majf=0, minf=1 00:25:43.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.145 issued rwts: total=0,5844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.145 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.145 job2: (groupid=0, jobs=1): err= 0: pid=1058367: Sun Dec 15 13:05:50 2024 00:25:43.145 write: IOPS=418, BW=105MiB/s (110MB/s)(1062MiB/10158msec); 0 zone resets 00:25:43.145 slat (usec): min=22, max=64409, avg=2081.12, stdev=5015.75 00:25:43.145 clat (msec): min=3, max=470, avg=150.83, stdev=94.02 00:25:43.145 lat (msec): min=3, max=470, avg=152.92, stdev=95.19 00:25:43.145 clat percentiles (msec): 00:25:43.145 | 1.00th=[ 21], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 80], 00:25:43.145 | 30.00th=[ 87], 40.00th=[ 109], 50.00th=[ 118], 60.00th=[ 129], 00:25:43.145 | 70.00th=[ 201], 80.00th=[ 239], 90.00th=[ 305], 95.00th=[ 330], 00:25:43.145 | 99.00th=[ 388], 99.50th=[ 401], 99.90th=[ 451], 99.95th=[ 451], 00:25:43.145 | 99.99th=[ 472] 00:25:43.145 bw ( KiB/s): min=43008, max=321024, per=8.87%, avg=107136.00, stdev=65624.55, samples=20 00:25:43.145 iops : min= 168, max= 1254, avg=418.50, stdev=256.35, samples=20 00:25:43.145 lat (msec) : 4=0.07%, 10=0.21%, 20=0.68%, 50=11.60%, 100=23.77% 00:25:43.145 lat (msec) : 250=45.92%, 500=17.75% 00:25:43.145 cpu : usr=0.88%, sys=1.14%, ctx=1449, majf=0, minf=1 00:25:43.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.145 issued rwts: total=0,4249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.145 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.145 job3: (groupid=0, jobs=1): err= 0: pid=1058377: Sun Dec 15 13:05:50 2024 00:25:43.145 write: IOPS=511, BW=128MiB/s (134MB/s)(1288MiB/10072msec); 0 zone resets 00:25:43.145 slat (usec): 
min=28, max=86343, avg=1832.93, stdev=4358.33 00:25:43.145 clat (msec): min=12, max=372, avg=123.20, stdev=81.63 00:25:43.145 lat (msec): min=12, max=372, avg=125.03, stdev=82.69 00:25:43.145 clat percentiles (msec): 00:25:43.145 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46], 00:25:43.145 | 30.00th=[ 47], 40.00th=[ 78], 50.00th=[ 109], 60.00th=[ 138], 00:25:43.145 | 70.00th=[ 148], 80.00th=[ 180], 90.00th=[ 241], 95.00th=[ 313], 00:25:43.145 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:25:43.145 | 99.99th=[ 372] 00:25:43.145 bw ( KiB/s): min=45056, max=364032, per=10.78%, avg=130252.80, stdev=87321.29, samples=20 00:25:43.145 iops : min= 176, max= 1422, avg=508.80, stdev=341.10, samples=20 00:25:43.145 lat (msec) : 20=0.16%, 50=30.62%, 100=16.40%, 250=43.56%, 500=9.26% 00:25:43.145 cpu : usr=1.29%, sys=1.64%, ctx=1418, majf=0, minf=1 00:25:43.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.146 issued rwts: total=0,5151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.146 job4: (groupid=0, jobs=1): err= 0: pid=1058384: Sun Dec 15 13:05:50 2024 00:25:43.146 write: IOPS=420, BW=105MiB/s (110MB/s)(1064MiB/10127msec); 0 zone resets 00:25:43.146 slat (usec): min=17, max=75535, avg=1769.12, stdev=4884.92 00:25:43.146 clat (msec): min=2, max=416, avg=150.46, stdev=92.76 00:25:43.146 lat (msec): min=3, max=422, avg=152.23, stdev=93.93 00:25:43.146 clat percentiles (msec): 00:25:43.146 | 1.00th=[ 10], 5.00th=[ 27], 10.00th=[ 48], 20.00th=[ 75], 00:25:43.146 | 30.00th=[ 88], 40.00th=[ 113], 50.00th=[ 140], 60.00th=[ 146], 00:25:43.146 | 70.00th=[ 180], 80.00th=[ 222], 90.00th=[ 288], 95.00th=[ 363], 00:25:43.146 | 99.00th=[ 388], 99.50th=[ 397], 99.90th=[ 409], 
99.95th=[ 414], 00:25:43.146 | 99.99th=[ 418] 00:25:43.146 bw ( KiB/s): min=40960, max=189952, per=8.89%, avg=107340.80, stdev=43369.08, samples=20 00:25:43.146 iops : min= 160, max= 742, avg=419.30, stdev=169.41, samples=20 00:25:43.146 lat (msec) : 4=0.05%, 10=1.25%, 20=2.11%, 50=7.14%, 100=25.73% 00:25:43.146 lat (msec) : 250=48.99%, 500=14.73% 00:25:43.146 cpu : usr=0.87%, sys=1.45%, ctx=2126, majf=0, minf=1 00:25:43.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.146 issued rwts: total=0,4256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.146 job5: (groupid=0, jobs=1): err= 0: pid=1058405: Sun Dec 15 13:05:50 2024 00:25:43.146 write: IOPS=326, BW=81.5MiB/s (85.5MB/s)(828MiB/10157msec); 0 zone resets 00:25:43.146 slat (usec): min=25, max=102214, avg=2097.62, stdev=6390.85 00:25:43.146 clat (usec): min=834, max=478932, avg=193662.13, stdev=104675.75 00:25:43.146 lat (usec): min=891, max=478972, avg=195759.75, stdev=106270.01 00:25:43.146 clat percentiles (msec): 00:25:43.146 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 57], 20.00th=[ 95], 00:25:43.146 | 30.00th=[ 136], 40.00th=[ 155], 50.00th=[ 194], 60.00th=[ 226], 00:25:43.146 | 70.00th=[ 249], 80.00th=[ 279], 90.00th=[ 342], 95.00th=[ 380], 00:25:43.146 | 99.00th=[ 426], 99.50th=[ 435], 99.90th=[ 460], 99.95th=[ 460], 00:25:43.146 | 99.99th=[ 481] 00:25:43.146 bw ( KiB/s): min=40960, max=183808, per=6.89%, avg=83174.40, stdev=37582.51, samples=20 00:25:43.146 iops : min= 160, max= 718, avg=324.90, stdev=146.81, samples=20 00:25:43.146 lat (usec) : 1000=0.06% 00:25:43.146 lat (msec) : 2=0.39%, 4=0.45%, 10=0.66%, 20=2.78%, 50=4.62% 00:25:43.146 lat (msec) : 100=12.07%, 250=49.14%, 500=29.82% 00:25:43.146 cpu : usr=0.80%, sys=1.10%, ctx=1982, 
majf=0, minf=1 00:25:43.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.146 issued rwts: total=0,3313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.146 job6: (groupid=0, jobs=1): err= 0: pid=1058412: Sun Dec 15 13:05:50 2024 00:25:43.146 write: IOPS=320, BW=80.1MiB/s (84.0MB/s)(810MiB/10114msec); 0 zone resets 00:25:43.146 slat (usec): min=27, max=107338, avg=2960.95, stdev=6258.49 00:25:43.146 clat (msec): min=3, max=424, avg=196.72, stdev=90.42 00:25:43.146 lat (msec): min=3, max=424, avg=199.68, stdev=91.61 00:25:43.146 clat percentiles (msec): 00:25:43.146 | 1.00th=[ 62], 5.00th=[ 102], 10.00th=[ 112], 20.00th=[ 118], 00:25:43.146 | 30.00th=[ 121], 40.00th=[ 129], 50.00th=[ 174], 60.00th=[ 215], 00:25:43.146 | 70.00th=[ 253], 80.00th=[ 300], 90.00th=[ 330], 95.00th=[ 359], 00:25:43.146 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 405], 99.95th=[ 405], 00:25:43.146 | 99.99th=[ 426] 00:25:43.146 bw ( KiB/s): min=43008, max=145408, per=6.73%, avg=81331.20, stdev=34894.88, samples=20 00:25:43.146 iops : min= 168, max= 568, avg=317.70, stdev=136.31, samples=20 00:25:43.146 lat (msec) : 4=0.03%, 10=0.37%, 20=0.25%, 50=0.12%, 100=4.01% 00:25:43.146 lat (msec) : 250=64.72%, 500=30.49% 00:25:43.146 cpu : usr=0.94%, sys=0.96%, ctx=918, majf=0, minf=1 00:25:43.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.146 issued rwts: total=0,3240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.146 job7: (groupid=0, jobs=1): err= 0: 
pid=1058416: Sun Dec 15 13:05:50 2024 00:25:43.146 write: IOPS=293, BW=73.4MiB/s (77.0MB/s)(746MiB/10154msec); 0 zone resets 00:25:43.146 slat (usec): min=22, max=79815, avg=2551.85, stdev=6598.29 00:25:43.146 clat (usec): min=1417, max=486147, avg=215221.14, stdev=107450.55 00:25:43.146 lat (usec): min=1470, max=486204, avg=217772.98, stdev=108984.15 00:25:43.146 clat percentiles (msec): 00:25:43.146 | 1.00th=[ 5], 5.00th=[ 30], 10.00th=[ 62], 20.00th=[ 118], 00:25:43.146 | 30.00th=[ 136], 40.00th=[ 192], 50.00th=[ 226], 60.00th=[ 249], 00:25:43.146 | 70.00th=[ 292], 80.00th=[ 321], 90.00th=[ 359], 95.00th=[ 376], 00:25:43.146 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 468], 99.95th=[ 485], 00:25:43.146 | 99.99th=[ 485] 00:25:43.146 bw ( KiB/s): min=43008, max=140288, per=6.19%, avg=74752.00, stdev=32026.56, samples=20 00:25:43.146 iops : min= 168, max= 548, avg=292.00, stdev=125.10, samples=20 00:25:43.146 lat (msec) : 2=0.07%, 4=0.57%, 10=1.01%, 20=1.27%, 50=5.06% 00:25:43.146 lat (msec) : 100=8.45%, 250=43.68%, 500=39.89% 00:25:43.146 cpu : usr=0.68%, sys=0.91%, ctx=1521, majf=0, minf=1 00:25:43.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.146 issued rwts: total=0,2983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.146 job8: (groupid=0, jobs=1): err= 0: pid=1058440: Sun Dec 15 13:05:50 2024 00:25:43.146 write: IOPS=578, BW=145MiB/s (152MB/s)(1469MiB/10159msec); 0 zone resets 00:25:43.146 slat (usec): min=17, max=102463, avg=1219.94, stdev=4031.14 00:25:43.146 clat (msec): min=7, max=476, avg=109.35, stdev=88.37 00:25:43.146 lat (msec): min=7, max=476, avg=110.57, stdev=89.18 00:25:43.146 clat percentiles (msec): 00:25:43.146 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 39], 
00:25:43.146 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 69], 60.00th=[ 103], 00:25:43.146 | 70.00th=[ 153], 80.00th=[ 194], 90.00th=[ 241], 95.00th=[ 284], 00:25:43.146 | 99.00th=[ 363], 99.50th=[ 384], 99.90th=[ 456], 99.95th=[ 460], 00:25:43.146 | 99.99th=[ 477] 00:25:43.146 bw ( KiB/s): min=63488, max=426496, per=12.32%, avg=148824.15, stdev=121698.62, samples=20 00:25:43.146 iops : min= 248, max= 1666, avg=581.30, stdev=475.40, samples=20 00:25:43.146 lat (msec) : 10=0.05%, 20=0.48%, 50=46.28%, 100=12.90%, 250=31.73% 00:25:43.146 lat (msec) : 500=8.56% 00:25:43.146 cpu : usr=1.20%, sys=1.35%, ctx=2318, majf=0, minf=1 00:25:43.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.146 issued rwts: total=0,5877,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.146 job9: (groupid=0, jobs=1): err= 0: pid=1058451: Sun Dec 15 13:05:50 2024 00:25:43.146 write: IOPS=456, BW=114MiB/s (120MB/s)(1158MiB/10134msec); 0 zone resets 00:25:43.146 slat (usec): min=20, max=92084, avg=1678.21, stdev=4677.97 00:25:43.146 clat (msec): min=2, max=371, avg=138.25, stdev=96.82 00:25:43.146 lat (msec): min=2, max=371, avg=139.92, stdev=98.09 00:25:43.146 clat percentiles (msec): 00:25:43.146 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 55], 00:25:43.146 | 30.00th=[ 81], 40.00th=[ 87], 50.00th=[ 101], 60.00th=[ 128], 00:25:43.146 | 70.00th=[ 174], 80.00th=[ 243], 90.00th=[ 309], 95.00th=[ 334], 00:25:43.146 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 372], 00:25:43.146 | 99.99th=[ 372] 00:25:43.146 bw ( KiB/s): min=47104, max=251392, per=9.68%, avg=116940.80, stdev=58412.04, samples=20 00:25:43.146 iops : min= 184, max= 982, avg=456.80, stdev=228.17, samples=20 00:25:43.146 lat (msec) : 4=0.39%, 
10=1.19%, 20=2.85%, 50=14.36%, 100=31.09% 00:25:43.146 lat (msec) : 250=31.66%, 500=18.46% 00:25:43.146 cpu : usr=1.09%, sys=1.36%, ctx=2213, majf=0, minf=1 00:25:43.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.146 issued rwts: total=0,4631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.146 job10: (groupid=0, jobs=1): err= 0: pid=1058460: Sun Dec 15 13:05:50 2024 00:25:43.146 write: IOPS=502, BW=126MiB/s (132MB/s)(1266MiB/10073msec); 0 zone resets 00:25:43.146 slat (usec): min=25, max=92332, avg=1478.94, stdev=4467.52 00:25:43.146 clat (usec): min=1474, max=413428, avg=125633.17, stdev=93528.16 00:25:43.146 lat (usec): min=1530, max=413493, avg=127112.12, stdev=94746.68 00:25:43.146 clat percentiles (msec): 00:25:43.146 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 37], 20.00th=[ 55], 00:25:43.146 | 30.00th=[ 59], 40.00th=[ 74], 50.00th=[ 91], 60.00th=[ 125], 00:25:43.146 | 70.00th=[ 146], 80.00th=[ 205], 90.00th=[ 249], 95.00th=[ 342], 00:25:43.146 | 99.00th=[ 393], 99.50th=[ 409], 99.90th=[ 414], 99.95th=[ 414], 00:25:43.146 | 99.99th=[ 414] 00:25:43.146 bw ( KiB/s): min=43008, max=279040, per=10.60%, avg=127974.40, stdev=78608.52, samples=20 00:25:43.146 iops : min= 168, max= 1090, avg=499.90, stdev=307.06, samples=20 00:25:43.146 lat (msec) : 2=0.04%, 4=0.34%, 10=1.80%, 20=3.83%, 50=7.17% 00:25:43.146 lat (msec) : 100=38.78%, 250=38.07%, 500=9.98% 00:25:43.146 cpu : usr=1.14%, sys=1.53%, ctx=2514, majf=0, minf=1 00:25:43.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:43.146 issued rwts: 
total=0,5062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:43.146 00:25:43.146 Run status group 0 (all jobs): 00:25:43.146 WRITE: bw=1180MiB/s (1237MB/s), 73.4MiB/s-145MiB/s (77.0MB/s-152MB/s), io=11.7GiB (12.6GB), run=10072-10159msec 00:25:43.146 00:25:43.146 Disk stats (read/write): 00:25:43.147 nvme0n1: ios=49/6416, merge=0/0, ticks=240/1209547, in_queue=1209787, util=97.69% 00:25:43.147 nvme10n1: ios=52/11514, merge=0/0, ticks=1129/1214154, in_queue=1215283, util=99.94% 00:25:43.147 nvme1n1: ios=45/8344, merge=0/0, ticks=997/1195562, in_queue=1196559, util=99.95% 00:25:43.147 nvme2n1: ios=53/10045, merge=0/0, ticks=2184/1205159, in_queue=1207343, util=99.92% 00:25:43.147 nvme3n1: ios=0/8270, merge=0/0, ticks=0/1218866, in_queue=1218866, util=97.76% 00:25:43.147 nvme4n1: ios=43/6472, merge=0/0, ticks=1930/1205145, in_queue=1207075, util=99.97% 00:25:43.147 nvme5n1: ios=42/6258, merge=0/0, ticks=747/1198549, in_queue=1199296, util=99.99% 00:25:43.147 nvme6n1: ios=0/5813, merge=0/0, ticks=0/1203897, in_queue=1203897, util=98.33% 00:25:43.147 nvme7n1: ios=42/11596, merge=0/0, ticks=3776/1200904, in_queue=1204680, util=100.00% 00:25:43.147 nvme8n1: ios=40/9087, merge=0/0, ticks=2022/1215269, in_queue=1217291, util=99.95% 00:25:43.147 nvme9n1: ios=47/9865, merge=0/0, ticks=989/1218025, in_queue=1219014, util=99.99% 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:43.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.147 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:43.147 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:43.147 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:43.147 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:25:43.147 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:43.147 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:25:43.147 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:25:43.147 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:43.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:43.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:43.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:43.665 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:43.665 13:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.665 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:43.925 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDK4 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.925 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:44.184 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.184 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:44.184 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:44.184 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:44.184 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:44.184 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:44.184 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:25:44.443 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:44.443 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:25:44.443 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:44.443 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:44.444 
13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:44.444 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.444 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:44.703 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 
-- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.703 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:44.963 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:44.963 
NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:44.963 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:45.223 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:45.223 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:45.223 
13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:45.223 
13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:45.223 rmmod nvme_tcp 00:25:45.223 rmmod nvme_fabrics 00:25:45.223 rmmod nvme_keyring 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1050925 ']' 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 1050925 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1050925 ']' 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1050925 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.223 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1050925 00:25:45.482 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:45.482 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:45.482 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1050925' 00:25:45.482 killing process with pid 1050925 00:25:45.482 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1050925 00:25:45.482 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1050925 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.741 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:48.280 00:25:48.280 real 1m10.800s 00:25:48.280 user 4m14.797s 00:25:48.280 sys 0m17.957s 
00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.280 ************************************ 00:25:48.280 END TEST nvmf_multiconnection 00:25:48.280 ************************************ 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:48.280 ************************************ 00:25:48.280 START TEST nvmf_initiator_timeout 00:25:48.280 ************************************ 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:48.280 * Looking for test storage... 
00:25:48.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:48.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.280 --rc genhtml_branch_coverage=1 00:25:48.280 --rc genhtml_function_coverage=1 00:25:48.280 --rc genhtml_legend=1 00:25:48.280 --rc geninfo_all_blocks=1 00:25:48.280 --rc geninfo_unexecuted_blocks=1 00:25:48.280 00:25:48.280 ' 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:48.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.280 --rc genhtml_branch_coverage=1 00:25:48.280 --rc genhtml_function_coverage=1 00:25:48.280 --rc genhtml_legend=1 00:25:48.280 --rc geninfo_all_blocks=1 00:25:48.280 --rc geninfo_unexecuted_blocks=1 00:25:48.280 00:25:48.280 ' 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:48.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.280 --rc genhtml_branch_coverage=1 00:25:48.280 --rc genhtml_function_coverage=1 00:25:48.280 --rc genhtml_legend=1 00:25:48.280 --rc geninfo_all_blocks=1 00:25:48.280 --rc geninfo_unexecuted_blocks=1 00:25:48.280 00:25:48.280 ' 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:48.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.280 --rc genhtml_branch_coverage=1 00:25:48.280 --rc genhtml_function_coverage=1 00:25:48.280 --rc genhtml_legend=1 00:25:48.280 --rc geninfo_all_blocks=1 00:25:48.280 --rc geninfo_unexecuted_blocks=1 00:25:48.280 00:25:48.280 ' 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.280 
13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.280 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:25:48.281 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.729 13:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:53.729 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:53.729 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:53.729 13:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:53.729 Found net devices under 0000:af:00.0: cvl_0_0 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.729 13:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:53.729 Found net devices under 0000:af:00.1: cvl_0_1 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.729 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.989 13:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:53.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:25:53.989 00:25:53.989 --- 10.0.0.2 ping statistics --- 00:25:53.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.989 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:53.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:25:53.989 00:25:53.989 --- 10.0.0.1 ping statistics --- 00:25:53.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.989 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1063728 
00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 1063728 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1063728 ']' 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.989 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.989 [2024-12-15 13:06:01.889495] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:25:53.989 [2024-12-15 13:06:01.889545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.248 [2024-12-15 13:06:01.969521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:54.248 [2024-12-15 13:06:01.992248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:54.248 [2024-12-15 13:06:01.992283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.248 [2024-12-15 13:06:01.992292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.248 [2024-12-15 13:06:01.992298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.248 [2024-12-15 13:06:01.992304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.248 [2024-12-15 13:06:01.993627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.248 [2024-12-15 13:06:01.993723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.248 [2024-12-15 13:06:01.993869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.248 [2024-12-15 13:06:01.993870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.248 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.248 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:54.248 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.248 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.248 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.248 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.248 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:54.248 
13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:54.248 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.248 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.507 Malloc0 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.507 Delay0 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.507 [2024-12-15 13:06:02.177353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.507 [2024-12-15 13:06:02.210600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.507 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:55.444 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:55.444 
13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:25:55.444 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.444 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:55.444 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:25:57.985 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:57.985 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:57.985 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:57.985 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:57.985 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.985 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:25:57.985 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:57.985 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1064231 00:25:57.985 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:57.985 [global] 00:25:57.985 thread=1 00:25:57.985 invalidate=1 00:25:57.985 rw=write 00:25:57.985 time_based=1 00:25:57.985 runtime=60 00:25:57.985 ioengine=libaio 00:25:57.985 direct=1 00:25:57.985 bs=4096 00:25:57.985 
iodepth=1 00:25:57.985 norandommap=0 00:25:57.985 numjobs=1 00:25:57.985 00:25:57.985 verify_dump=1 00:25:57.985 verify_backlog=512 00:25:57.985 verify_state_save=0 00:25:57.985 do_verify=1 00:25:57.985 verify=crc32c-intel 00:25:57.985 [job0] 00:25:57.985 filename=/dev/nvme0n1 00:25:57.985 Could not set queue depth (nvme0n1) 00:25:57.985 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:57.985 fio-3.35 00:25:57.985 Starting 1 thread 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.519 true 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.519 true 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:00.519 true 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.519 true 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.519 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.810 true 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.810 true 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.810 13:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.810 true 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.810 true 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:03.810 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1064231 00:27:00.050 00:27:00.050 job0: (groupid=0, jobs=1): err= 0: pid=1064456: Sun Dec 15 13:07:05 2024 00:27:00.050 read: IOPS=494, BW=1980KiB/s (2027kB/s)(116MiB/60000msec) 00:27:00.050 slat (usec): min=6, max=13772, avg= 8.12, stdev=79.89 00:27:00.050 clat (usec): min=191, max=41357k, avg=1804.43, stdev=240002.45 00:27:00.050 lat (usec): min=198, max=41357k, avg=1812.55, stdev=240002.57 00:27:00.050 clat percentiles (usec): 00:27:00.050 | 1.00th=[ 210], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:27:00.050 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:27:00.050 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 269], 95.00th=[ 277], 00:27:00.051 | 
99.00th=[ 429], 99.50th=[ 482], 99.90th=[42206], 99.95th=[42206], 00:27:00.051 | 99.99th=[42206] 00:27:00.051 write: IOPS=502, BW=2011KiB/s (2059kB/s)(118MiB/60000msec); 0 zone resets 00:27:00.051 slat (usec): min=9, max=40837, avg=13.28, stdev=287.15 00:27:00.051 clat (usec): min=137, max=1781, avg=186.72, stdev=39.06 00:27:00.051 lat (usec): min=148, max=41110, avg=200.00, stdev=290.50 00:27:00.051 clat percentiles (usec): 00:27:00.051 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:27:00.051 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:27:00.051 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 241], 00:27:00.051 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 318], 99.95th=[ 1237], 00:27:00.051 | 99.99th=[ 1647] 00:27:00.051 bw ( KiB/s): min= 3984, max=10568, per=100.00%, avg=8338.29, stdev=1399.35, samples=28 00:27:00.051 iops : min= 996, max= 2642, avg=2084.57, stdev=349.84, samples=28 00:27:00.051 lat (usec) : 250=74.22%, 500=25.55%, 750=0.01%, 1000=0.01% 00:27:00.051 lat (msec) : 2=0.03%, 50=0.19%, >=2000=0.01% 00:27:00.051 cpu : usr=0.54%, sys=1.11%, ctx=59863, majf=0, minf=1 00:27:00.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:00.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.051 issued rwts: total=29696,30162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:00.051 00:27:00.051 Run status group 0 (all jobs): 00:27:00.051 READ: bw=1980KiB/s (2027kB/s), 1980KiB/s-1980KiB/s (2027kB/s-2027kB/s), io=116MiB (122MB), run=60000-60000msec 00:27:00.051 WRITE: bw=2011KiB/s (2059kB/s), 2011KiB/s-2011KiB/s (2059kB/s-2059kB/s), io=118MiB (124MB), run=60000-60000msec 00:27:00.051 00:27:00.051 Disk stats (read/write): 00:27:00.051 nvme0n1: ios=29750/29696, merge=0/0, ticks=13633/5337, in_queue=18970, 
util=99.93% 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:00.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:00.051 nvmf hotplug test: fio successful as expected 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.051 13:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:00.051 rmmod nvme_tcp 00:27:00.051 rmmod nvme_fabrics 00:27:00.051 rmmod nvme_keyring 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1063728 ']' 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1063728 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1063728 ']' 00:27:00.051 
13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1063728 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1063728 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1063728' 00:27:00.051 killing process with pid 1063728 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1063728 00:27:00.051 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1063728 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.051 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.620 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.620 00:27:00.620 real 1m12.513s 00:27:00.620 user 4m21.590s 00:27:00.620 sys 0m7.627s 00:27:00.620 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.620 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.620 ************************************ 00:27:00.620 END TEST nvmf_initiator_timeout 00:27:00.620 ************************************ 00:27:00.620 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:00.620 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:00.620 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:00.620 13:07:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.620 13:07:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:07.191 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.191 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:07.192 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:07.192 Found net devices under 0000:af:00.0: cvl_0_0 00:27:07.192 13:07:13 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:07.192 Found net devices under 0000:af:00.1: cvl_0_1 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:07.192 ************************************ 00:27:07.192 START 
TEST nvmf_perf_adq 00:27:07.192 ************************************ 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:07.192 * Looking for test storage... 00:27:07.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:07.192 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:07.192 13:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:07.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.192 --rc genhtml_branch_coverage=1 00:27:07.192 --rc genhtml_function_coverage=1 00:27:07.192 --rc genhtml_legend=1 00:27:07.192 --rc geninfo_all_blocks=1 00:27:07.192 --rc geninfo_unexecuted_blocks=1 00:27:07.192 00:27:07.192 ' 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:07.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.192 --rc genhtml_branch_coverage=1 00:27:07.192 --rc genhtml_function_coverage=1 00:27:07.192 --rc genhtml_legend=1 00:27:07.192 --rc geninfo_all_blocks=1 00:27:07.192 --rc geninfo_unexecuted_blocks=1 00:27:07.192 00:27:07.192 ' 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:07.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.192 --rc genhtml_branch_coverage=1 00:27:07.192 --rc genhtml_function_coverage=1 00:27:07.192 --rc genhtml_legend=1 00:27:07.192 --rc geninfo_all_blocks=1 00:27:07.192 --rc geninfo_unexecuted_blocks=1 00:27:07.192 00:27:07.192 ' 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:07.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.192 --rc genhtml_branch_coverage=1 00:27:07.192 --rc genhtml_function_coverage=1 00:27:07.192 --rc genhtml_legend=1 00:27:07.192 --rc geninfo_all_blocks=1 00:27:07.192 --rc geninfo_unexecuted_blocks=1 00:27:07.192 00:27:07.192 ' 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.192 
13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.192 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:07.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:07.193 13:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.193 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.469 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:12.470 13:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:12.470 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:12.470 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:12.470 Found net devices under 0000:af:00.0: cvl_0_0 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:12.470 Found net devices under 0000:af:00.1: cvl_0_1 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:12.470 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:13.037 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:16.326 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:21.602 13:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.602 13:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:21.602 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:27:21.602 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:21.602 Found net devices under 0000:af:00.0: cvl_0_0 
00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:21.602 Found net devices under 0000:af:00.1: cvl_0_1 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.602 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:21.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.734 ms 00:27:21.603 00:27:21.603 --- 10.0.0.2 ping statistics --- 00:27:21.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.603 rtt min/avg/max/mdev = 0.734/0.734/0.734/0.000 ms 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:27:21.603 00:27:21.603 --- 10.0.0.1 ping statistics --- 00:27:21.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.603 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1082472 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1082472 00:27:21.603 
13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1082472 ']' 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.603 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 [2024-12-15 13:07:28.943417] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:27:21.603 [2024-12-15 13:07:28.943460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.603 [2024-12-15 13:07:29.023651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.603 [2024-12-15 13:07:29.046489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.603 [2024-12-15 13:07:29.046526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:21.603 [2024-12-15 13:07:29.046533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.603 [2024-12-15 13:07:29.046539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.603 [2024-12-15 13:07:29.046544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.603 [2024-12-15 13:07:29.047983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.603 [2024-12-15 13:07:29.048089] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.603 [2024-12-15 13:07:29.048199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.603 [2024-12-15 13:07:29.048200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 [2024-12-15 13:07:29.256117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.603 
13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 Malloc1 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.603 [2024-12-15 13:07:29.319874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1082501 00:27:21.603 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:21.604 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:23.507 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:23.507 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.507 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:23.507 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.507 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:23.507 "tick_rate": 2100000000, 00:27:23.507 "poll_groups": [ 00:27:23.507 { 00:27:23.507 "name": "nvmf_tgt_poll_group_000", 00:27:23.507 "admin_qpairs": 1, 00:27:23.507 "io_qpairs": 1, 00:27:23.507 "current_admin_qpairs": 1, 00:27:23.507 "current_io_qpairs": 1, 00:27:23.507 "pending_bdev_io": 0, 00:27:23.507 "completed_nvme_io": 19021, 00:27:23.507 "transports": [ 00:27:23.507 { 00:27:23.507 "trtype": "TCP" 00:27:23.507 } 00:27:23.507 ] 00:27:23.507 }, 00:27:23.507 { 00:27:23.507 "name": "nvmf_tgt_poll_group_001", 00:27:23.507 "admin_qpairs": 0, 00:27:23.507 "io_qpairs": 1, 00:27:23.507 "current_admin_qpairs": 0, 00:27:23.507 "current_io_qpairs": 1, 00:27:23.507 "pending_bdev_io": 0, 00:27:23.507 "completed_nvme_io": 19430, 00:27:23.507 "transports": [ 
00:27:23.507 { 00:27:23.507 "trtype": "TCP" 00:27:23.507 } 00:27:23.507 ] 00:27:23.507 }, 00:27:23.507 { 00:27:23.507 "name": "nvmf_tgt_poll_group_002", 00:27:23.507 "admin_qpairs": 0, 00:27:23.507 "io_qpairs": 1, 00:27:23.507 "current_admin_qpairs": 0, 00:27:23.507 "current_io_qpairs": 1, 00:27:23.507 "pending_bdev_io": 0, 00:27:23.507 "completed_nvme_io": 19343, 00:27:23.507 "transports": [ 00:27:23.507 { 00:27:23.507 "trtype": "TCP" 00:27:23.507 } 00:27:23.507 ] 00:27:23.507 }, 00:27:23.507 { 00:27:23.507 "name": "nvmf_tgt_poll_group_003", 00:27:23.507 "admin_qpairs": 0, 00:27:23.507 "io_qpairs": 1, 00:27:23.507 "current_admin_qpairs": 0, 00:27:23.507 "current_io_qpairs": 1, 00:27:23.507 "pending_bdev_io": 0, 00:27:23.507 "completed_nvme_io": 19342, 00:27:23.507 "transports": [ 00:27:23.507 { 00:27:23.507 "trtype": "TCP" 00:27:23.507 } 00:27:23.507 ] 00:27:23.507 } 00:27:23.507 ] 00:27:23.507 }' 00:27:23.508 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:23.508 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:23.508 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:23.508 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:23.508 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1082501 00:27:31.627 Initializing NVMe Controllers 00:27:31.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:31.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:31.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:31.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:31.627 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:31.627 Initialization complete. Launching workers. 00:27:31.627 ======================================================== 00:27:31.627 Latency(us) 00:27:31.627 Device Information : IOPS MiB/s Average min max 00:27:31.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10234.30 39.98 6254.84 2264.15 10861.34 00:27:31.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10327.50 40.34 6197.31 2148.75 13411.68 00:27:31.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10337.20 40.38 6190.75 2382.51 10678.70 00:27:31.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10211.70 39.89 6266.90 2337.26 13323.94 00:27:31.627 ======================================================== 00:27:31.627 Total : 41110.68 160.59 6227.27 2148.75 13411.68 00:27:31.627 00:27:31.627 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:31.627 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:31.627 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:31.627 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:31.627 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:31.627 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:31.627 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:31.627 rmmod nvme_tcp 00:27:31.627 rmmod nvme_fabrics 00:27:31.627 rmmod nvme_keyring 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:31.886 13:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1082472 ']' 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1082472 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1082472 ']' 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1082472 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1082472 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1082472' 00:27:31.886 killing process with pid 1082472 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1082472 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1082472 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:31.886 
13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.886 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.422 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:34.422 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:34.422 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:34.422 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:35.370 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:37.905 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.179 13:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.179 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:43.180 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.180 13:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:43.180 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:27:43.180 Found net devices under 0000:af:00.0: cvl_0_0 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:43.180 Found net devices under 0000:af:00.1: cvl_0_1 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:43.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:27:43.180 00:27:43.180 --- 10.0.0.2 ping statistics --- 00:27:43.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.180 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:43.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:27:43.180 00:27:43.180 --- 10.0.0.1 ping statistics --- 00:27:43.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.180 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:43.180 net.core.busy_poll = 1 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:43.180 net.core.busy_read = 1 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:43.180 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:43.180 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:43.180 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:43.181 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1086415 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1086415 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1086415 ']' 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.440 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.440 [2024-12-15 13:07:51.182112] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:27:43.440 [2024-12-15 13:07:51.182166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.440 [2024-12-15 13:07:51.261015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.440 [2024-12-15 13:07:51.284515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.440 [2024-12-15 13:07:51.284554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.440 [2024-12-15 13:07:51.284562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.440 [2024-12-15 13:07:51.284568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:43.440 [2024-12-15 13:07:51.284573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.440 [2024-12-15 13:07:51.286027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.440 [2024-12-15 13:07:51.286135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.440 [2024-12-15 13:07:51.286242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.440 [2024-12-15 13:07:51.286243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.699 [2024-12-15 13:07:51.515154] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.699 13:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.699 Malloc1 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.699 [2024-12-15 13:07:51.579865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1086544 
00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:43.699 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:46.233 "tick_rate": 2100000000, 00:27:46.233 "poll_groups": [ 00:27:46.233 { 00:27:46.233 "name": "nvmf_tgt_poll_group_000", 00:27:46.233 "admin_qpairs": 1, 00:27:46.233 "io_qpairs": 2, 00:27:46.233 "current_admin_qpairs": 1, 00:27:46.233 "current_io_qpairs": 2, 00:27:46.233 "pending_bdev_io": 0, 00:27:46.233 "completed_nvme_io": 28281, 00:27:46.233 "transports": [ 00:27:46.233 { 00:27:46.233 "trtype": "TCP" 00:27:46.233 } 00:27:46.233 ] 00:27:46.233 }, 00:27:46.233 { 00:27:46.233 "name": "nvmf_tgt_poll_group_001", 00:27:46.233 "admin_qpairs": 0, 00:27:46.233 "io_qpairs": 2, 00:27:46.233 "current_admin_qpairs": 0, 00:27:46.233 "current_io_qpairs": 2, 00:27:46.233 "pending_bdev_io": 0, 00:27:46.233 "completed_nvme_io": 28995, 00:27:46.233 "transports": [ 00:27:46.233 { 00:27:46.233 "trtype": "TCP" 00:27:46.233 } 00:27:46.233 ] 00:27:46.233 }, 00:27:46.233 { 00:27:46.233 "name": "nvmf_tgt_poll_group_002", 00:27:46.233 "admin_qpairs": 0, 00:27:46.233 "io_qpairs": 0, 00:27:46.233 "current_admin_qpairs": 0, 
00:27:46.233 "current_io_qpairs": 0, 00:27:46.233 "pending_bdev_io": 0, 00:27:46.233 "completed_nvme_io": 0, 00:27:46.233 "transports": [ 00:27:46.233 { 00:27:46.233 "trtype": "TCP" 00:27:46.233 } 00:27:46.233 ] 00:27:46.233 }, 00:27:46.233 { 00:27:46.233 "name": "nvmf_tgt_poll_group_003", 00:27:46.233 "admin_qpairs": 0, 00:27:46.233 "io_qpairs": 0, 00:27:46.233 "current_admin_qpairs": 0, 00:27:46.233 "current_io_qpairs": 0, 00:27:46.233 "pending_bdev_io": 0, 00:27:46.233 "completed_nvme_io": 0, 00:27:46.233 "transports": [ 00:27:46.233 { 00:27:46.233 "trtype": "TCP" 00:27:46.233 } 00:27:46.233 ] 00:27:46.233 } 00:27:46.233 ] 00:27:46.233 }' 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:27:46.233 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1086544 00:27:54.349 Initializing NVMe Controllers 00:27:54.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:54.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:54.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:54.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:54.350 Initialization complete. Launching workers. 
00:27:54.350 ======================================================== 00:27:54.350 Latency(us) 00:27:54.350 Device Information : IOPS MiB/s Average min max 00:27:54.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6586.19 25.73 9716.04 1479.02 52973.72 00:27:54.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7456.97 29.13 8581.11 1052.27 52561.10 00:27:54.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8268.66 32.30 7766.15 1351.66 53742.85 00:27:54.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7372.17 28.80 8680.83 1549.60 52535.99 00:27:54.350 ======================================================== 00:27:54.350 Total : 29683.99 115.95 8630.68 1052.27 53742.85 00:27:54.350 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:54.350 rmmod nvme_tcp 00:27:54.350 rmmod nvme_fabrics 00:27:54.350 rmmod nvme_keyring 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:54.350 13:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1086415 ']' 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1086415 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1086415 ']' 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1086415 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1086415 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086415' 00:27:54.350 killing process with pid 1086415 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1086415 00:27:54.350 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1086415 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:54.350 
13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.350 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.255 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:56.255 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:27:56.255 00:27:56.255 real 0m50.247s 00:27:56.255 user 2m44.026s 00:27:56.255 sys 0m10.031s 00:27:56.255 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.255 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.255 ************************************ 00:27:56.255 END TEST nvmf_perf_adq 00:27:56.255 ************************************ 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:27:56.515 ************************************ 00:27:56.515 START TEST nvmf_shutdown 00:27:56.515 ************************************ 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:56.515 * Looking for test storage... 00:27:56.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.515 13:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:56.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.515 --rc genhtml_branch_coverage=1 00:27:56.515 --rc genhtml_function_coverage=1 00:27:56.515 --rc genhtml_legend=1 00:27:56.515 --rc geninfo_all_blocks=1 00:27:56.515 --rc geninfo_unexecuted_blocks=1 00:27:56.515 00:27:56.515 ' 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:56.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.515 --rc genhtml_branch_coverage=1 00:27:56.515 --rc genhtml_function_coverage=1 00:27:56.515 --rc genhtml_legend=1 00:27:56.515 --rc geninfo_all_blocks=1 00:27:56.515 --rc geninfo_unexecuted_blocks=1 00:27:56.515 00:27:56.515 ' 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:56.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.515 --rc genhtml_branch_coverage=1 00:27:56.515 --rc genhtml_function_coverage=1 00:27:56.515 --rc genhtml_legend=1 00:27:56.515 --rc geninfo_all_blocks=1 00:27:56.515 --rc geninfo_unexecuted_blocks=1 00:27:56.515 00:27:56.515 ' 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:56.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.515 --rc genhtml_branch_coverage=1 00:27:56.515 --rc genhtml_function_coverage=1 00:27:56.515 --rc genhtml_legend=1 00:27:56.515 --rc geninfo_all_blocks=1 00:27:56.515 --rc geninfo_unexecuted_blocks=1 00:27:56.515 00:27:56.515 ' 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.515 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.516 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.516 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:56.516 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.516 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:27:56.516 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:56.775 ************************************ 00:27:56.775 START TEST nvmf_shutdown_tc1 00:27:56.775 ************************************ 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.775 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:03.355 13:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.355 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.356 13:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:03.356 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.356 13:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:03.356 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:03.356 Found net devices under 0000:af:00.0: cvl_0_0 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:03.356 Found net devices under 0000:af:00.1: cvl_0_1 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:03.356 13:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:03.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:28:03.356 00:28:03.356 --- 10.0.0.2 ping statistics --- 00:28:03.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.356 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:03.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:28:03.356 00:28:03.356 --- 10.0.0.1 ping statistics --- 00:28:03.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.356 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1091660 00:28:03.356 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1091660 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1091660 ']' 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:03.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 [2024-12-15 13:08:10.460331] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:03.357 [2024-12-15 13:08:10.460386] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.357 [2024-12-15 13:08:10.540440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:03.357 [2024-12-15 13:08:10.563446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.357 [2024-12-15 13:08:10.563485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.357 [2024-12-15 13:08:10.563493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.357 [2024-12-15 13:08:10.563499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.357 [2024-12-15 13:08:10.563504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:03.357 [2024-12-15 13:08:10.564883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.357 [2024-12-15 13:08:10.564990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.357 [2024-12-15 13:08:10.565076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.357 [2024-12-15 13:08:10.565077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 [2024-12-15 13:08:10.704872] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.357 13:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.357 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 Malloc1 00:28:03.357 [2024-12-15 13:08:10.828729] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.357 Malloc2 00:28:03.357 Malloc3 00:28:03.357 Malloc4 00:28:03.357 Malloc5 00:28:03.357 Malloc6 00:28:03.357 Malloc7 00:28:03.357 Malloc8 00:28:03.357 Malloc9 
00:28:03.357 Malloc10 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1091930 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1091930 /var/tmp/bdevperf.sock 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1091930 ']' 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:03.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.357 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.357 { 00:28:03.357 "params": { 00:28:03.357 "name": "Nvme$subsystem", 00:28:03.357 "trtype": "$TEST_TRANSPORT", 00:28:03.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.357 "adrfam": "ipv4", 00:28:03.357 "trsvcid": "$NVMF_PORT", 00:28:03.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.357 "hdgst": ${hdgst:-false}, 00:28:03.357 "ddgst": ${ddgst:-false} 00:28:03.357 }, 00:28:03.357 "method": "bdev_nvme_attach_controller" 00:28:03.357 } 00:28:03.357 EOF 00:28:03.357 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.617 { 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme$subsystem", 00:28:03.617 "trtype": "$TEST_TRANSPORT", 00:28:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "$NVMF_PORT", 00:28:03.617 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.617 "hdgst": ${hdgst:-false}, 00:28:03.617 "ddgst": ${ddgst:-false} 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 } 00:28:03.617 EOF 00:28:03.617 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.617 { 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme$subsystem", 00:28:03.617 "trtype": "$TEST_TRANSPORT", 00:28:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "$NVMF_PORT", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.617 "hdgst": ${hdgst:-false}, 00:28:03.617 "ddgst": ${ddgst:-false} 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 } 00:28:03.617 EOF 00:28:03.617 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.617 { 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme$subsystem", 00:28:03.617 "trtype": "$TEST_TRANSPORT", 00:28:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "$NVMF_PORT", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.617 "hdgst": 
${hdgst:-false}, 00:28:03.617 "ddgst": ${ddgst:-false} 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 } 00:28:03.617 EOF 00:28:03.617 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.617 { 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme$subsystem", 00:28:03.617 "trtype": "$TEST_TRANSPORT", 00:28:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "$NVMF_PORT", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.617 "hdgst": ${hdgst:-false}, 00:28:03.617 "ddgst": ${ddgst:-false} 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 } 00:28:03.617 EOF 00:28:03.617 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.617 { 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme$subsystem", 00:28:03.617 "trtype": "$TEST_TRANSPORT", 00:28:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "$NVMF_PORT", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.617 "hdgst": ${hdgst:-false}, 00:28:03.617 "ddgst": ${ddgst:-false} 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 
00:28:03.617 } 00:28:03.617 EOF 00:28:03.617 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.617 { 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme$subsystem", 00:28:03.617 "trtype": "$TEST_TRANSPORT", 00:28:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "$NVMF_PORT", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.617 "hdgst": ${hdgst:-false}, 00:28:03.617 "ddgst": ${ddgst:-false} 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 } 00:28:03.617 EOF 00:28:03.617 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 [2024-12-15 13:08:11.302808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:03.617 [2024-12-15 13:08:11.302862] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.617 { 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme$subsystem", 00:28:03.617 "trtype": "$TEST_TRANSPORT", 00:28:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "$NVMF_PORT", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.617 "hdgst": ${hdgst:-false}, 00:28:03.617 "ddgst": ${ddgst:-false} 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 } 00:28:03.617 EOF 00:28:03.617 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.617 { 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme$subsystem", 00:28:03.617 "trtype": "$TEST_TRANSPORT", 00:28:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "$NVMF_PORT", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.617 "hdgst": ${hdgst:-false}, 00:28:03.617 "ddgst": ${ddgst:-false} 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 
00:28:03.617 } 00:28:03.617 EOF 00:28:03.617 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.617 { 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme$subsystem", 00:28:03.617 "trtype": "$TEST_TRANSPORT", 00:28:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "$NVMF_PORT", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.617 "hdgst": ${hdgst:-false}, 00:28:03.617 "ddgst": ${ddgst:-false} 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 } 00:28:03.617 EOF 00:28:03.617 )") 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:03.617 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme1", 00:28:03.617 "trtype": "tcp", 00:28:03.617 "traddr": "10.0.0.2", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "4420", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.617 "hdgst": false, 00:28:03.617 "ddgst": false 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 },{ 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme2", 00:28:03.617 "trtype": "tcp", 00:28:03.617 "traddr": "10.0.0.2", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "4420", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:03.617 "hdgst": false, 00:28:03.617 "ddgst": false 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 },{ 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme3", 00:28:03.617 "trtype": "tcp", 00:28:03.617 "traddr": "10.0.0.2", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "4420", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:03.617 "hdgst": false, 00:28:03.617 "ddgst": false 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 },{ 00:28:03.617 "params": { 00:28:03.617 "name": "Nvme4", 00:28:03.617 "trtype": "tcp", 00:28:03.617 "traddr": "10.0.0.2", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "4420", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:03.617 "hdgst": false, 00:28:03.617 "ddgst": false 00:28:03.617 }, 00:28:03.617 "method": "bdev_nvme_attach_controller" 00:28:03.617 },{ 00:28:03.617 "params": { 
00:28:03.617 "name": "Nvme5", 00:28:03.617 "trtype": "tcp", 00:28:03.617 "traddr": "10.0.0.2", 00:28:03.617 "adrfam": "ipv4", 00:28:03.617 "trsvcid": "4420", 00:28:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:03.617 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:03.618 "hdgst": false, 00:28:03.618 "ddgst": false 00:28:03.618 }, 00:28:03.618 "method": "bdev_nvme_attach_controller" 00:28:03.618 },{ 00:28:03.618 "params": { 00:28:03.618 "name": "Nvme6", 00:28:03.618 "trtype": "tcp", 00:28:03.618 "traddr": "10.0.0.2", 00:28:03.618 "adrfam": "ipv4", 00:28:03.618 "trsvcid": "4420", 00:28:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:03.618 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:03.618 "hdgst": false, 00:28:03.618 "ddgst": false 00:28:03.618 }, 00:28:03.618 "method": "bdev_nvme_attach_controller" 00:28:03.618 },{ 00:28:03.618 "params": { 00:28:03.618 "name": "Nvme7", 00:28:03.618 "trtype": "tcp", 00:28:03.618 "traddr": "10.0.0.2", 00:28:03.618 "adrfam": "ipv4", 00:28:03.618 "trsvcid": "4420", 00:28:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:03.618 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:03.618 "hdgst": false, 00:28:03.618 "ddgst": false 00:28:03.618 }, 00:28:03.618 "method": "bdev_nvme_attach_controller" 00:28:03.618 },{ 00:28:03.618 "params": { 00:28:03.618 "name": "Nvme8", 00:28:03.618 "trtype": "tcp", 00:28:03.618 "traddr": "10.0.0.2", 00:28:03.618 "adrfam": "ipv4", 00:28:03.618 "trsvcid": "4420", 00:28:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:03.618 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:03.618 "hdgst": false, 00:28:03.618 "ddgst": false 00:28:03.618 }, 00:28:03.618 "method": "bdev_nvme_attach_controller" 00:28:03.618 },{ 00:28:03.618 "params": { 00:28:03.618 "name": "Nvme9", 00:28:03.618 "trtype": "tcp", 00:28:03.618 "traddr": "10.0.0.2", 00:28:03.618 "adrfam": "ipv4", 00:28:03.618 "trsvcid": "4420", 00:28:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:03.618 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:03.618 "hdgst": false, 00:28:03.618 "ddgst": false 00:28:03.618 }, 00:28:03.618 "method": "bdev_nvme_attach_controller" 00:28:03.618 },{ 00:28:03.618 "params": { 00:28:03.618 "name": "Nvme10", 00:28:03.618 "trtype": "tcp", 00:28:03.618 "traddr": "10.0.0.2", 00:28:03.618 "adrfam": "ipv4", 00:28:03.618 "trsvcid": "4420", 00:28:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:03.618 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:03.618 "hdgst": false, 00:28:03.618 "ddgst": false 00:28:03.618 }, 00:28:03.618 "method": "bdev_nvme_attach_controller" 00:28:03.618 }' 00:28:03.618 [2024-12-15 13:08:11.379231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.618 [2024-12-15 13:08:11.401590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.523 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:05.523 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:05.523 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:05.523 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.523 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:05.523 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.523 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1091930 00:28:05.523 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:05.523 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:06.461 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1091930 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1091660 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.461 { 00:28:06.461 "params": { 00:28:06.461 "name": "Nvme$subsystem", 00:28:06.461 "trtype": "$TEST_TRANSPORT", 00:28:06.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.461 "adrfam": "ipv4", 00:28:06.461 "trsvcid": "$NVMF_PORT", 00:28:06.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.461 "hdgst": ${hdgst:-false}, 00:28:06.461 "ddgst": ${ddgst:-false} 00:28:06.461 }, 00:28:06.461 "method": "bdev_nvme_attach_controller" 00:28:06.461 } 00:28:06.461 EOF 00:28:06.461 )") 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.461 13:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.461 { 00:28:06.461 "params": { 00:28:06.461 "name": "Nvme$subsystem", 00:28:06.461 "trtype": "$TEST_TRANSPORT", 00:28:06.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.461 "adrfam": "ipv4", 00:28:06.461 "trsvcid": "$NVMF_PORT", 00:28:06.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.461 "hdgst": ${hdgst:-false}, 00:28:06.461 "ddgst": ${ddgst:-false} 00:28:06.461 }, 00:28:06.461 "method": "bdev_nvme_attach_controller" 00:28:06.461 } 00:28:06.461 EOF 00:28:06.461 )") 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.461 { 00:28:06.461 "params": { 00:28:06.461 "name": "Nvme$subsystem", 00:28:06.461 "trtype": "$TEST_TRANSPORT", 00:28:06.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.461 "adrfam": "ipv4", 00:28:06.461 "trsvcid": "$NVMF_PORT", 00:28:06.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.461 "hdgst": ${hdgst:-false}, 00:28:06.461 "ddgst": ${ddgst:-false} 00:28:06.461 }, 00:28:06.461 "method": "bdev_nvme_attach_controller" 00:28:06.461 } 00:28:06.461 EOF 00:28:06.461 )") 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.461 
13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.461 { 00:28:06.461 "params": { 00:28:06.461 "name": "Nvme$subsystem", 00:28:06.461 "trtype": "$TEST_TRANSPORT", 00:28:06.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.461 "adrfam": "ipv4", 00:28:06.461 "trsvcid": "$NVMF_PORT", 00:28:06.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.461 "hdgst": ${hdgst:-false}, 00:28:06.461 "ddgst": ${ddgst:-false} 00:28:06.461 }, 00:28:06.461 "method": "bdev_nvme_attach_controller" 00:28:06.461 } 00:28:06.461 EOF 00:28:06.461 )") 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.461 { 00:28:06.461 "params": { 00:28:06.461 "name": "Nvme$subsystem", 00:28:06.461 "trtype": "$TEST_TRANSPORT", 00:28:06.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.461 "adrfam": "ipv4", 00:28:06.461 "trsvcid": "$NVMF_PORT", 00:28:06.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.461 "hdgst": ${hdgst:-false}, 00:28:06.461 "ddgst": ${ddgst:-false} 00:28:06.461 }, 00:28:06.461 "method": "bdev_nvme_attach_controller" 00:28:06.461 } 00:28:06.461 EOF 00:28:06.461 )") 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:28:06.461 { 00:28:06.461 "params": { 00:28:06.461 "name": "Nvme$subsystem", 00:28:06.461 "trtype": "$TEST_TRANSPORT", 00:28:06.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.461 "adrfam": "ipv4", 00:28:06.461 "trsvcid": "$NVMF_PORT", 00:28:06.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.461 "hdgst": ${hdgst:-false}, 00:28:06.461 "ddgst": ${ddgst:-false} 00:28:06.461 }, 00:28:06.461 "method": "bdev_nvme_attach_controller" 00:28:06.461 } 00:28:06.461 EOF 00:28:06.461 )") 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.461 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.461 { 00:28:06.461 "params": { 00:28:06.461 "name": "Nvme$subsystem", 00:28:06.461 "trtype": "$TEST_TRANSPORT", 00:28:06.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.461 "adrfam": "ipv4", 00:28:06.461 "trsvcid": "$NVMF_PORT", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.462 "hdgst": ${hdgst:-false}, 00:28:06.462 "ddgst": ${ddgst:-false} 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 } 00:28:06.462 EOF 00:28:06.462 )") 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.462 [2024-12-15 13:08:14.228880] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:06.462 [2024-12-15 13:08:14.228930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092404 ] 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.462 { 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme$subsystem", 00:28:06.462 "trtype": "$TEST_TRANSPORT", 00:28:06.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "$NVMF_PORT", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.462 "hdgst": ${hdgst:-false}, 00:28:06.462 "ddgst": ${ddgst:-false} 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 } 00:28:06.462 EOF 00:28:06.462 )") 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.462 { 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme$subsystem", 00:28:06.462 "trtype": "$TEST_TRANSPORT", 00:28:06.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "$NVMF_PORT", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.462 "hdgst": ${hdgst:-false}, 00:28:06.462 "ddgst": ${ddgst:-false} 00:28:06.462 }, 00:28:06.462 "method": 
"bdev_nvme_attach_controller" 00:28:06.462 } 00:28:06.462 EOF 00:28:06.462 )") 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.462 { 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme$subsystem", 00:28:06.462 "trtype": "$TEST_TRANSPORT", 00:28:06.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "$NVMF_PORT", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.462 "hdgst": ${hdgst:-false}, 00:28:06.462 "ddgst": ${ddgst:-false} 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 } 00:28:06.462 EOF 00:28:06.462 )") 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:06.462 13:08:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme1", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 },{ 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme2", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 },{ 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme3", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 },{ 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme4", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 },{ 00:28:06.462 "params": { 
00:28:06.462 "name": "Nvme5", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 },{ 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme6", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 },{ 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme7", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 },{ 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme8", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 },{ 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme9", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 },{ 00:28:06.462 "params": { 00:28:06.462 "name": "Nvme10", 00:28:06.462 "trtype": "tcp", 00:28:06.462 "traddr": "10.0.0.2", 00:28:06.462 "adrfam": "ipv4", 00:28:06.462 "trsvcid": "4420", 00:28:06.462 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:06.462 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:06.462 "hdgst": false, 00:28:06.462 "ddgst": false 00:28:06.462 }, 00:28:06.462 "method": "bdev_nvme_attach_controller" 00:28:06.462 }' 00:28:06.462 [2024-12-15 13:08:14.308614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.462 [2024-12-15 13:08:14.331184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.837 Running I/O for 1 seconds... 00:28:09.212 2258.00 IOPS, 141.12 MiB/s 00:28:09.212 Latency(us) 00:28:09.212 [2024-12-15T12:08:17.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.212 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme1n1 : 1.15 286.53 17.91 0.00 0.00 219312.25 7895.53 198730.12 00:28:09.212 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme2n1 : 1.15 278.17 17.39 0.00 0.00 224835.68 16727.28 219701.64 00:28:09.212 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme3n1 : 1.13 285.53 17.85 0.00 0.00 211239.36 15978.30 212711.13 00:28:09.212 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme4n1 : 1.14 285.16 17.82 0.00 0.00 211634.85 12545.46 203723.34 00:28:09.212 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme5n1 : 1.17 274.56 17.16 0.00 0.00 218665.84 15416.56 225693.50 00:28:09.212 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme6n1 : 1.17 274.00 17.13 0.00 0.00 216068.68 17101.78 223696.21 00:28:09.212 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme7n1 : 1.15 277.13 17.32 0.00 0.00 210173.95 14854.83 212711.13 00:28:09.212 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme8n1 : 1.16 275.81 17.24 0.00 0.00 208398.43 12795.12 228689.43 00:28:09.212 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme9n1 : 1.17 272.59 17.04 0.00 0.00 207279.45 19473.55 223696.21 00:28:09.212 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:09.212 Verification LBA range: start 0x0 length 0x400 00:28:09.212 Nvme10n1 : 1.17 272.75 17.05 0.00 0.00 204855.78 17725.93 239674.51 00:28:09.212 [2024-12-15T12:08:17.119Z] =================================================================================================================== 00:28:09.212 [2024-12-15T12:08:17.119Z] Total : 2782.24 173.89 0.00 0.00 213260.78 7895.53 239674.51 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:09.212 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:09.212 rmmod nvme_tcp 00:28:09.212 rmmod nvme_fabrics 00:28:09.212 rmmod nvme_keyring 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1091660 ']' 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1091660 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1091660 ']' 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1091660 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091660 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091660' 00:28:09.472 killing process with pid 1091660 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1091660 00:28:09.472 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1091660 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:09.731 13:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.731 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:12.266 00:28:12.266 real 0m15.145s 00:28:12.266 user 0m33.761s 00:28:12.266 sys 0m5.666s 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:12.266 ************************************ 00:28:12.266 END TEST nvmf_shutdown_tc1 00:28:12.266 ************************************ 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:12.266 ************************************ 00:28:12.266 
START TEST nvmf_shutdown_tc2 00:28:12.266 ************************************ 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:12.266 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.267 13:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.267 13:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.267 13:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:12.267 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:12.267 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:12.267 13:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.267 13:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:12.267 Found net devices under 0000:af:00.0: cvl_0_0 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:12.267 Found net devices under 0000:af:00.1: cvl_0_1 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.267 13:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.267 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:12.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:28:12.268 00:28:12.268 --- 10.0.0.2 ping statistics --- 00:28:12.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.268 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:28:12.268 00:28:12.268 --- 10.0.0.1 ping statistics --- 00:28:12.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.268 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.268 13:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1093415 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1093415 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1093415 ']' 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.268 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.268 [2024-12-15 13:08:20.037710] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:12.268 [2024-12-15 13:08:20.037760] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.268 [2024-12-15 13:08:20.117393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.268 [2024-12-15 13:08:20.139870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.268 [2024-12-15 13:08:20.139910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.268 [2024-12-15 13:08:20.139917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.268 [2024-12-15 13:08:20.139923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.268 [2024-12-15 13:08:20.139928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:12.268 [2024-12-15 13:08:20.141441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.268 [2024-12-15 13:08:20.141548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.268 [2024-12-15 13:08:20.141656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.268 [2024-12-15 13:08:20.141657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.527 [2024-12-15 13:08:20.272785] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.527 13:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.527 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.528 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:12.528 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:12.528 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:12.528 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.528 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.528 Malloc1 00:28:12.528 [2024-12-15 13:08:20.376333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.528 Malloc2 00:28:12.787 Malloc3 00:28:12.787 Malloc4 00:28:12.787 Malloc5 00:28:12.787 Malloc6 00:28:12.787 Malloc7 00:28:12.787 Malloc8 00:28:13.046 Malloc9 
00:28:13.046 Malloc10 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1093480 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1093480 /var/tmp/bdevperf.sock 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1093480 ']' 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:13.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:13.046 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": ${hdgst:-false}, 00:28:13.047 "ddgst": ${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 
"adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": ${hdgst:-false}, 00:28:13.047 "ddgst": ${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": ${hdgst:-false}, 00:28:13.047 "ddgst": ${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": ${hdgst:-false}, 00:28:13.047 "ddgst": ${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": ${hdgst:-false}, 00:28:13.047 "ddgst": ${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": ${hdgst:-false}, 00:28:13.047 "ddgst": 
${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": ${hdgst:-false}, 00:28:13.047 "ddgst": ${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 [2024-12-15 13:08:20.843883] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:13.047 [2024-12-15 13:08:20.843934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093480 ] 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": ${hdgst:-false}, 00:28:13.047 "ddgst": ${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": 
${hdgst:-false}, 00:28:13.047 "ddgst": ${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.047 { 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme$subsystem", 00:28:13.047 "trtype": "$TEST_TRANSPORT", 00:28:13.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "$NVMF_PORT", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.047 "hdgst": ${hdgst:-false}, 00:28:13.047 "ddgst": ${ddgst:-false} 00:28:13.047 }, 00:28:13.047 "method": "bdev_nvme_attach_controller" 00:28:13.047 } 00:28:13.047 EOF 00:28:13.047 )") 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:13.047 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:13.047 "params": { 00:28:13.047 "name": "Nvme1", 00:28:13.047 "trtype": "tcp", 00:28:13.047 "traddr": "10.0.0.2", 00:28:13.047 "adrfam": "ipv4", 00:28:13.047 "trsvcid": "4420", 00:28:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:13.047 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 },{ 00:28:13.048 "params": { 00:28:13.048 "name": "Nvme2", 00:28:13.048 "trtype": "tcp", 00:28:13.048 "traddr": "10.0.0.2", 00:28:13.048 "adrfam": "ipv4", 00:28:13.048 "trsvcid": "4420", 00:28:13.048 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:13.048 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:13.048 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 },{ 00:28:13.048 "params": { 00:28:13.048 "name": "Nvme3", 00:28:13.048 "trtype": "tcp", 00:28:13.048 "traddr": "10.0.0.2", 00:28:13.048 "adrfam": "ipv4", 00:28:13.048 "trsvcid": "4420", 00:28:13.048 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:13.048 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:13.048 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 },{ 00:28:13.048 "params": { 00:28:13.048 "name": "Nvme4", 00:28:13.048 "trtype": "tcp", 00:28:13.048 "traddr": "10.0.0.2", 00:28:13.048 "adrfam": "ipv4", 00:28:13.048 "trsvcid": "4420", 00:28:13.048 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:13.048 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:13.048 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 },{ 00:28:13.048 "params": { 
00:28:13.048 "name": "Nvme5", 00:28:13.048 "trtype": "tcp", 00:28:13.048 "traddr": "10.0.0.2", 00:28:13.048 "adrfam": "ipv4", 00:28:13.048 "trsvcid": "4420", 00:28:13.048 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:13.048 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:13.048 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 },{ 00:28:13.048 "params": { 00:28:13.048 "name": "Nvme6", 00:28:13.048 "trtype": "tcp", 00:28:13.048 "traddr": "10.0.0.2", 00:28:13.048 "adrfam": "ipv4", 00:28:13.048 "trsvcid": "4420", 00:28:13.048 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:13.048 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:13.048 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 },{ 00:28:13.048 "params": { 00:28:13.048 "name": "Nvme7", 00:28:13.048 "trtype": "tcp", 00:28:13.048 "traddr": "10.0.0.2", 00:28:13.048 "adrfam": "ipv4", 00:28:13.048 "trsvcid": "4420", 00:28:13.048 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:13.048 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:13.048 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 },{ 00:28:13.048 "params": { 00:28:13.048 "name": "Nvme8", 00:28:13.048 "trtype": "tcp", 00:28:13.048 "traddr": "10.0.0.2", 00:28:13.048 "adrfam": "ipv4", 00:28:13.048 "trsvcid": "4420", 00:28:13.048 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:13.048 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:13.048 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 },{ 00:28:13.048 "params": { 00:28:13.048 "name": "Nvme9", 00:28:13.048 "trtype": "tcp", 00:28:13.048 "traddr": "10.0.0.2", 00:28:13.048 "adrfam": "ipv4", 00:28:13.048 "trsvcid": "4420", 00:28:13.048 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:13.048 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:13.048 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 },{ 00:28:13.048 "params": { 00:28:13.048 "name": "Nvme10", 00:28:13.048 "trtype": "tcp", 00:28:13.048 "traddr": "10.0.0.2", 00:28:13.048 "adrfam": "ipv4", 00:28:13.048 "trsvcid": "4420", 00:28:13.048 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:13.048 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:13.048 "hdgst": false, 00:28:13.048 "ddgst": false 00:28:13.048 }, 00:28:13.048 "method": "bdev_nvme_attach_controller" 00:28:13.048 }' 00:28:13.048 [2024-12-15 13:08:20.923702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.048 [2024-12-15 13:08:20.946177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.426 Running I/O for 10 seconds... 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:14.994 13:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 1093480 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1093480 ']' 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1093480 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093480 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093480' 00:28:14.994 killing process with pid 1093480 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1093480 00:28:14.994 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1093480 00:28:15.253 Received shutdown signal, test time was about 0.711827 seconds 00:28:15.253 00:28:15.253 Latency(us) 00:28:15.253 [2024-12-15T12:08:23.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.253 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.253 Verification LBA range: start 0x0 length 0x400 00:28:15.253 Nvme1n1 : 0.70 275.44 17.21 0.00 0.00 229161.45 32705.58 203723.34 00:28:15.253 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:28:15.253 Verification LBA range: start 0x0 length 0x400 00:28:15.253 Nvme2n1 : 0.71 272.24 17.02 0.00 0.00 226629.40 19598.38 222697.57 00:28:15.253 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.253 Verification LBA range: start 0x0 length 0x400 00:28:15.253 Nvme3n1 : 0.68 282.42 17.65 0.00 0.00 212932.19 14542.75 212711.13 00:28:15.253 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.253 Verification LBA range: start 0x0 length 0x400 00:28:15.253 Nvme4n1 : 0.69 312.15 19.51 0.00 0.00 184247.62 11796.48 215707.06 00:28:15.253 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.253 Verification LBA range: start 0x0 length 0x400 00:28:15.253 Nvme5n1 : 0.69 279.69 17.48 0.00 0.00 204693.21 15042.07 212711.13 00:28:15.253 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.253 Verification LBA range: start 0x0 length 0x400 00:28:15.253 Nvme6n1 : 0.70 276.26 17.27 0.00 0.00 202590.76 16227.96 228689.43 00:28:15.254 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.254 Verification LBA range: start 0x0 length 0x400 00:28:15.254 Nvme7n1 : 0.70 273.72 17.11 0.00 0.00 199275.11 15478.98 197731.47 00:28:15.254 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.254 Verification LBA range: start 0x0 length 0x400 00:28:15.254 Nvme8n1 : 0.70 280.84 17.55 0.00 0.00 187198.11 6990.51 144803.35 00:28:15.254 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.254 Verification LBA range: start 0x0 length 0x400 00:28:15.254 Nvme9n1 : 0.71 271.09 16.94 0.00 0.00 191616.73 18100.42 215707.06 00:28:15.254 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:15.254 Verification LBA range: start 0x0 length 0x400 00:28:15.254 Nvme10n1 : 0.71 269.97 16.87 0.00 0.00 187583.80 19223.89 
226692.14 00:28:15.254 [2024-12-15T12:08:23.161Z] =================================================================================================================== 00:28:15.254 [2024-12-15T12:08:23.161Z] Total : 2793.81 174.61 0.00 0.00 202335.21 6990.51 228689.43 00:28:15.254 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1093415 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.631 rmmod nvme_tcp 00:28:16.631 rmmod nvme_fabrics 00:28:16.631 rmmod nvme_keyring 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1093415 ']' 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1093415 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1093415 ']' 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1093415 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093415 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093415' 00:28:16.631 killing process with pid 1093415 00:28:16.631 13:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1093415 00:28:16.631 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1093415 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.904 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:19.000 00:28:19.000 real 
0m6.981s 00:28:19.000 user 0m20.027s 00:28:19.000 sys 0m1.257s 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:19.000 ************************************ 00:28:19.000 END TEST nvmf_shutdown_tc2 00:28:19.000 ************************************ 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:19.000 ************************************ 00:28:19.000 START TEST nvmf_shutdown_tc3 00:28:19.000 ************************************ 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.000 
13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:19.000 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:19.000 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.000 13:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:19.000 Found net devices under 0000:af:00.0: cvl_0_0 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.000 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.000 
13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:19.001 Found net devices under 0000:af:00.1: cvl_0_1 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.001 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.260 13:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.260 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.260 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.260 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:28:19.260 00:28:19.260 --- 10.0.0.2 ping statistics --- 00:28:19.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.260 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:28:19.260 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:28:19.260 00:28:19.260 --- 10.0.0.1 ping statistics --- 00:28:19.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.260 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:19.260 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.261 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.261 
13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1094708 00:28:19.261 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1094708 00:28:19.261 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1094708 ']' 00:28:19.261 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:19.261 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.261 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.261 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.261 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.261 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.261 [2024-12-15 13:08:27.103496] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:19.261 [2024-12-15 13:08:27.103546] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.520 [2024-12-15 13:08:27.185453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.520 [2024-12-15 13:08:27.208329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.520 [2024-12-15 13:08:27.208367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.520 [2024-12-15 13:08:27.208374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.520 [2024-12-15 13:08:27.208381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.520 [2024-12-15 13:08:27.208386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:19.520 [2024-12-15 13:08:27.209780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.520 [2024-12-15 13:08:27.209876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.520 [2024-12-15 13:08:27.209983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.520 [2024-12-15 13:08:27.209984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.520 [2024-12-15 13:08:27.342201] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.520 13:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.520 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.521 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.780 Malloc1 00:28:19.780 [2024-12-15 13:08:27.451932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.780 Malloc2 00:28:19.780 Malloc3 00:28:19.780 Malloc4 00:28:19.780 Malloc5 00:28:19.780 Malloc6 00:28:19.780 Malloc7 00:28:20.039 Malloc8 00:28:20.039 Malloc9 
00:28:20.039 Malloc10 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1094763 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1094763 /var/tmp/bdevperf.sock 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1094763 ']' 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:20.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.039 { 00:28:20.039 "params": { 00:28:20.039 "name": "Nvme$subsystem", 00:28:20.039 "trtype": "$TEST_TRANSPORT", 00:28:20.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.039 "adrfam": "ipv4", 00:28:20.039 "trsvcid": "$NVMF_PORT", 00:28:20.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.039 "hdgst": ${hdgst:-false}, 00:28:20.039 "ddgst": ${ddgst:-false} 00:28:20.039 }, 00:28:20.039 "method": "bdev_nvme_attach_controller" 00:28:20.039 } 00:28:20.039 EOF 00:28:20.039 )") 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.039 { 00:28:20.039 "params": { 00:28:20.039 "name": "Nvme$subsystem", 00:28:20.039 "trtype": "$TEST_TRANSPORT", 00:28:20.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.039 
"adrfam": "ipv4", 00:28:20.039 "trsvcid": "$NVMF_PORT", 00:28:20.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.039 "hdgst": ${hdgst:-false}, 00:28:20.039 "ddgst": ${ddgst:-false} 00:28:20.039 }, 00:28:20.039 "method": "bdev_nvme_attach_controller" 00:28:20.039 } 00:28:20.039 EOF 00:28:20.039 )") 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.039 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.039 { 00:28:20.039 "params": { 00:28:20.039 "name": "Nvme$subsystem", 00:28:20.039 "trtype": "$TEST_TRANSPORT", 00:28:20.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.039 "adrfam": "ipv4", 00:28:20.039 "trsvcid": "$NVMF_PORT", 00:28:20.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.039 "hdgst": ${hdgst:-false}, 00:28:20.039 "ddgst": ${ddgst:-false} 00:28:20.039 }, 00:28:20.039 "method": "bdev_nvme_attach_controller" 00:28:20.039 } 00:28:20.039 EOF 00:28:20.039 )") 00:28:20.040 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:20.040 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.040 [2024-12-15 13:08:27.919724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:20.040 [2024-12-15 13:08:27.919772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094763 ] 00:28:20.040 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.040 { 00:28:20.040 "params": { 00:28:20.040 "name": "Nvme$subsystem", 00:28:20.040 "trtype": "$TEST_TRANSPORT", 00:28:20.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.040 "adrfam": "ipv4", 00:28:20.040 "trsvcid": "$NVMF_PORT", 00:28:20.040 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.040 "hdgst": ${hdgst:-false}, 00:28:20.040 "ddgst": ${ddgst:-false} 00:28:20.040 }, 00:28:20.040 "method": "bdev_nvme_attach_controller" 00:28:20.040 } 00:28:20.040 EOF 00:28:20.040 )") 00:28:20.040 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:20.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:28:20.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:20.300 13:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme1", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 "method": "bdev_nvme_attach_controller" 00:28:20.300 },{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme2", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 "method": "bdev_nvme_attach_controller" 00:28:20.300 },{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme3", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 
"method": "bdev_nvme_attach_controller" 00:28:20.300 },{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme4", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 "method": "bdev_nvme_attach_controller" 00:28:20.300 },{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme5", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 "method": "bdev_nvme_attach_controller" 00:28:20.300 },{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme6", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 "method": "bdev_nvme_attach_controller" 00:28:20.300 },{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme7", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 "method": "bdev_nvme_attach_controller" 00:28:20.300 },{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme8", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 "method": "bdev_nvme_attach_controller" 00:28:20.300 },{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme9", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 "method": "bdev_nvme_attach_controller" 00:28:20.300 },{ 00:28:20.300 "params": { 00:28:20.300 "name": "Nvme10", 00:28:20.300 "trtype": "tcp", 00:28:20.300 "traddr": "10.0.0.2", 00:28:20.300 "adrfam": "ipv4", 00:28:20.300 "trsvcid": "4420", 00:28:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:20.300 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:20.300 "hdgst": false, 00:28:20.300 "ddgst": false 00:28:20.300 }, 00:28:20.300 "method": "bdev_nvme_attach_controller" 00:28:20.300 }' 00:28:20.300 [2024-12-15 13:08:27.996372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.300 [2024-12-15 13:08:28.019249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.679 Running I/O for 10 seconds... 
00:28:21.938 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.938 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:21.938 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:21.938 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.938 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:21.938 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.938 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:21.938 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:21.938 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:21.939 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:21.939 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:21.939 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:21.939 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:21.939 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:21.939 13:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:21.939 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:21.939 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.939 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:22.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=11 00:28:22.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 11 -ge 100 ']' 00:28:22.197 13:08:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1094708 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1094708 ']' 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1094708 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094708 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:22.472 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1094708' 00:28:22.472 killing process with pid 1094708 00:28:22.472 13:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1094708 00:28:22.473 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1094708 00:28:22.473 [2024-12-15 13:08:30.268699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85af00 is same with the state(6) to be set 00:28:22.473 [2024-12-15 13:08:30.270257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d980 is same with the state(6) to be set 00:28:22.473 [2024-12-15 13:08:30.272754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272844] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.272999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 
is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.474 [2024-12-15 13:08:30.273079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 
00:28:22.475 [2024-12-15 13:08:30.273107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273192] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.273231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b8c0 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 
is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 
00:28:22.475 [2024-12-15 13:08:30.274740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274830] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.274968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c280 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.275910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.475 [2024-12-15 13:08:30.275927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.275936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.275942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 
is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.275948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.275958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.275965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.275973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.275980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.275985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.275993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 
00:28:22.476 [2024-12-15 13:08:30.276034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276121] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set 00:28:22.476 [2024-12-15 13:08:30.276205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x85c770 is same with the state(6) to be set
00:28:22.476 [2024-12-15 13:08:30.277453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85caf0 is same with the state(6) to be set
00:28:22.477 [2024-12-15 13:08:30.278663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85cfc0 is same with the state(6) to be set
00:28:22.478 [2024-12-15 13:08:30.279655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.478 [2024-12-15 13:08:30.279670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d490 is same with the state(6) to be set
00:28:22.478 [2024-12-15 13:08:30.279695]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d490 is same with the state(6) to be set
00:28:22.478 [2024-12-15 13:08:30.279696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.478 [2024-12-15 13:08:30.279707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.478 [2024-12-15 13:08:30.279715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.478 [2024-12-15 13:08:30.279725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.478 [2024-12-15 13:08:30.279740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.478 [2024-12-15 13:08:30.279751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.478 [2024-12-15 13:08:30.279759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.478 [2024-12-15 13:08:30.279769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc335c0 is same with the state(6) to be set
00:28:22.478 [2024-12-15 13:08:30.279807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.478 [2024-12-15 13:08:30.279817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.478 [2024-12-15 13:08:30.279833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.478 [2024-12-15 13:08:30.279842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.478 [2024-12-15 13:08:30.279850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.478 [2024-12-15 13:08:30.279857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.478 [2024-12-15 13:08:30.279868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.478 [2024-12-15 13:08:30.279876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.478 [2024-12-15 13:08:30.279883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc23970 is same with the state(6) to be set
00:28:22.479 [2024-12-15 13:08:30.279909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.279919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.479 [2024-12-15 13:08:30.279927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.279935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.479 [2024-12-15 13:08:30.279943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.279951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.479 [2024-12-15 13:08:30.279960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.279970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.479 [2024-12-15 13:08:30.279981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe96e0 is same with the state(6) to be set
00:28:22.479 [2024-12-15 13:08:30.280006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.280015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.479 [2024-12-15 13:08:30.280023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.280034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.479 [2024-12-15 13:08:30.280042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.280051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.479 [2024-12-15 13:08:30.280059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.280067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.479 [2024-12-15 13:08:30.280075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1610 is same with the state(6) to be set
00:28:22.479 [2024-12-15 13:08:30.280101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.280111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.479 [2024-12-15 13:08:30.280123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:22.479 [2024-12-15 13:08:30.280126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d490 is same
with the state(6) to be set 00:28:22.479 [2024-12-15 13:08:30.280131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d490 is same with the state(6) to be set 00:28:22.479 [2024-12-15 13:08:30.280139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-12-15 13:08:30.280140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d490 is same with tid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 he state(6) to be set 00:28:22.479 [2024-12-15 13:08:30.280148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-15 13:08:30.280149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d490 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 he state(6) to be set 00:28:22.479 [2024-12-15 13:08:30.280158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d490 is same with the state(6) to be set 00:28:22.479 [2024-12-15 13:08:30.280158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf1300 is same with the state(6) to be set 00:28:22.479 [2024-12-15 13:08:30.280196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c5cd0 is same with the state(6) to be set 00:28:22.479 [2024-12-15 13:08:30.280289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 
13:08:30.280318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c6140 is same with the state(6) to be set 00:28:22.479 [2024-12-15 13:08:30.280365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.479 [2024-12-15 13:08:30.280387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.479 [2024-12-15 13:08:30.280394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c4440 is same with the state(6) to be set 00:28:22.480 [2024-12-15 13:08:30.280447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc23750 is same with the state(6) to be set 00:28:22.480 [2024-12-15 13:08:30.280527] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.480 [2024-12-15 13:08:30.280588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.280594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf1dc0 is same with the state(6) to be set 00:28:22.480 [2024-12-15 13:08:30.281063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.480 [2024-12-15 13:08:30.281200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.480 [2024-12-15 13:08:30.281540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.480 [2024-12-15 13:08:30.281547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 
[2024-12-15 13:08:30.281562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 
[2024-12-15 13:08:30.281927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.281989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.281996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.282004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.282012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.282020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.282027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.282035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.282041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.282050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.282057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.282065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.282072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.481 [2024-12-15 13:08:30.282081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.481 [2024-12-15 13:08:30.282087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcac10 is same with the state(6) to be set 00:28:22.482 [2024-12-15 13:08:30.282530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282707] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.282990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.282998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.283005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.283014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.283021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.283029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.283038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.283046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.283053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.283061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.283068] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.283076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.283083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.283091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.482 [2024-12-15 13:08:30.283098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.482 [2024-12-15 13:08:30.283106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 
13:08:30.283246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.483 [2024-12-15 13:08:30.283510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.283541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.283547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.284781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:22.483 [2024-12-15 13:08:30.284817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf1dc0 (9): Bad file descriptor 00:28:22.483 [2024-12-15 13:08:30.286113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:22.483 [2024-12-15 13:08:30.286145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c5cd0 (9): Bad file descriptor 00:28:22.483 [2024-12-15 13:08:30.286516] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.483 [2024-12-15 13:08:30.286568] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.483 [2024-12-15 13:08:30.286748] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: 
Unexpected PDU type 0x00 00:28:22.483 [2024-12-15 13:08:30.286889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.483 [2024-12-15 13:08:30.286905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf1dc0 with addr=10.0.0.2, port=4420 00:28:22.483 [2024-12-15 13:08:30.286915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf1dc0 is same with the state(6) to be set 00:28:22.483 [2024-12-15 13:08:30.286978] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.483 [2024-12-15 13:08:30.287242] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.483 [2024-12-15 13:08:30.287306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.287319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.287331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.287339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.287349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.287356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.287364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.287372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.287380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.287388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.287396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.483 [2024-12-15 13:08:30.287403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.483 [2024-12-15 13:08:30.287412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.484 [2024-12-15 13:08:30.287557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287646] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 
13:08:30.287923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.287986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.287994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.288002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.288010] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.288016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.288025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.484 [2024-12-15 13:08:30.288031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.484 [2024-12-15 13:08:30.288040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 
[2024-12-15 13:08:30.288191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b14ef0 is same with the state(6) to be set 00:28:22.485 [2024-12-15 13:08:30.288443] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.485 [2024-12-15 13:08:30.288629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.485 [2024-12-15 13:08:30.288642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c5cd0 with addr=10.0.0.2, port=4420 00:28:22.485 [2024-12-15 13:08:30.288651] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c5cd0 is same with the state(6) to be set 00:28:22.485 [2024-12-15 13:08:30.288662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf1dc0 (9): Bad file descriptor 00:28:22.485 [2024-12-15 13:08:30.288733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 
[2024-12-15 13:08:30.288914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.288991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.288999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.289006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.289014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.289024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.289033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.485 [2024-12-15 13:08:30.289040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.485 [2024-12-15 13:08:30.289049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289355] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.486 [2024-12-15 13:08:30.289434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.486 [2024-12-15 13:08:30.289443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.486 [2024-12-15 13:08:30.289450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.486 [2024-12-15 13:08:30.289459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:46-63, lba stepping by 128 from 30464 to 32640 ...]
00:28:22.487 [2024-12-15 13:08:30.289749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.487 [2024-12-15 13:08:30.289757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c75b0 is same with the state(6) to be set
00:28:22.487 [2024-12-15 13:08:30.290732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:22.487 [2024-12-15 13:08:30.290753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc335c0 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.290765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c5cd0 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.290775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:28:22.487 [2024-12-15 13:08:30.290782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:28:22.487 [2024-12-15 13:08:30.290791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*:
[nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:28:22.487 [2024-12-15 13:08:30.290799] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:28:22.487 [2024-12-15 13:08:30.290821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc23970 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.290847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe96e0 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.290863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1610 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.290874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf1300 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.290891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c6140 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.290906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c4440 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.290921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc23750 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.291905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:22.487 [2024-12-15 13:08:30.291936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:28:22.487 [2024-12-15 13:08:30.291943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:28:22.487 [2024-12-15 13:08:30.291951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:22.487 [2024-12-15 13:08:30.291958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:28:22.487 [2024-12-15 13:08:30.292460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.487 [2024-12-15 13:08:30.292475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc335c0 with addr=10.0.0.2, port=4420
00:28:22.487 [2024-12-15 13:08:30.292484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc335c0 is same with the state(6) to be set
00:28:22.487 [2024-12-15 13:08:30.292646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.487 [2024-12-15 13:08:30.292657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d1610 with addr=10.0.0.2, port=4420
00:28:22.487 [2024-12-15 13:08:30.292664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1610 is same with the state(6) to be set
00:28:22.487 [2024-12-15 13:08:30.292917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc335c0 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.292930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1610 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.292972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:28:22.487 [2024-12-15 13:08:30.292981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:28:22.487 [2024-12-15 13:08:30.292988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:28:22.487 [2024-12-15 13:08:30.292995] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:28:22.487 [2024-12-15 13:08:30.293003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:28:22.487 [2024-12-15 13:08:30.293009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:28:22.487 [2024-12-15 13:08:30.293017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:28:22.487 [2024-12-15 13:08:30.293023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:28:22.487 [2024-12-15 13:08:30.296251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:22.487 [2024-12-15 13:08:30.296547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.487 [2024-12-15 13:08:30.296563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf1dc0 with addr=10.0.0.2, port=4420
00:28:22.487 [2024-12-15 13:08:30.296572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf1dc0 is same with the state(6) to be set
00:28:22.487 [2024-12-15 13:08:30.296603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf1dc0 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.296634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:28:22.487 [2024-12-15 13:08:30.296641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:28:22.487 [2024-12-15 13:08:30.296649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:28:22.487 [2024-12-15 13:08:30.296659] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:28:22.487 [2024-12-15 13:08:30.297031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:22.487 [2024-12-15 13:08:30.297262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.487 [2024-12-15 13:08:30.297277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c5cd0 with addr=10.0.0.2, port=4420
00:28:22.487 [2024-12-15 13:08:30.297285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c5cd0 is same with the state(6) to be set
00:28:22.487 [2024-12-15 13:08:30.297315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c5cd0 (9): Bad file descriptor
00:28:22.487 [2024-12-15 13:08:30.297346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:28:22.487 [2024-12-15 13:08:30.297354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:28:22.487 [2024-12-15 13:08:30.297362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:22.487 [2024-12-15 13:08:30.297369] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:28:22.487 [2024-12-15 13:08:30.300869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.487 [2024-12-15 13:08:30.300898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:1-63, lba stepping by 128 from 16512 to 24448 ...]
00:28:22.489 [2024-12-15 13:08:30.301921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb840 is same with the state(6) to be set
00:28:22.489 [2024-12-15 13:08:30.302947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.489 [2024-12-15 13:08:30.302964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.489 [2024-12-15 13:08:30.302975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.489 [2024-12-15 13:08:30.302982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.489 [2024-12-15 13:08:30.302992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.489 [2024-12-15 13:08:30.302999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.489 [2024-12-15 13:08:30.303007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62
nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.489 [2024-12-15 13:08:30.303107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.489 [2024-12-15 13:08:30.303273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.489 [2024-12-15 13:08:30.303282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.490 [2024-12-15 13:08:30.303386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303473] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 
13:08:30.303739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.490 [2024-12-15 13:08:30.303809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.490 [2024-12-15 13:08:30.303815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.303980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.303988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb410 is same with the state(6) to be set 00:28:22.491 [2024-12-15 13:08:30.304974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.304991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 
13:08:30.305002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.491 [2024-12-15 13:08:30.305269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.491 [2024-12-15 13:08:30.305278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.491 [2024-12-15 13:08:30.305287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.491 [2024-12-15 13:08:30.305459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.491 [2024-12-15 13:08:30.305466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.305988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.305996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.306005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.306014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.306020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.306028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbe60 is same with the state(6) to be set
00:28:22.492 [2024-12-15 13:08:30.307002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.307017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.307030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.307038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.492 [2024-12-15 13:08:30.307047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.492 [2024-12-15 13:08:30.307054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.493 [2024-12-15 13:08:30.307686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.493 [2024-12-15 13:08:30.307693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.307991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.307998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.308006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.308013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.308022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.308029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.308037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd190 is same with the state(6) to be set
00:28:22.494 [2024-12-15 13:08:30.309020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.309032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.309047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.309054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.309063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.309070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.309079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.309086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.309095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.309101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.309110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.309119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.309128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.309135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.494 [2024-12-15 13:08:30.309143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.494 [2024-12-15 13:08:30.309150] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.494 [2024-12-15 13:08:30.309159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.494 [2024-12-15 13:08:30.309166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.494 [2024-12-15 13:08:30.309174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.494 [2024-12-15 13:08:30.309181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.494 [2024-12-15 13:08:30.309190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.494 [2024-12-15 13:08:30.309197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.494 [2024-12-15 13:08:30.309205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.494 [2024-12-15 13:08:30.309212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.494 [2024-12-15 13:08:30.309220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.494 [2024-12-15 13:08:30.309227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.494 [2024-12-15 13:08:30.309235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.495 [2024-12-15 13:08:30.309334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309424] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 
13:08:30.309688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.495 [2024-12-15 13:08:30.309742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.495 [2024-12-15 13:08:30.309749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309773] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.309943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 
[2024-12-15 13:08:30.309960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.309967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce3a40 is same with the state(6) to be set 00:28:22.496 [2024-12-15 13:08:30.310927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.310939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.310950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.310957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.310966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.310973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.310982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.310988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.310997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.496 [2024-12-15 13:08:30.311188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311275] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.496 [2024-12-15 13:08:30.311315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.496 [2024-12-15 13:08:30.311322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 
13:08:30.311547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311638] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 
[2024-12-15 13:08:30.311823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.497 [2024-12-15 13:08:30.311940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.497 [2024-12-15 13:08:30.311949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.498 [2024-12-15 13:08:30.311955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.498 [2024-12-15 13:08:30.311963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4d70 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.312926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:22.498 [2024-12-15 13:08:30.312943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:22.498 [2024-12-15 13:08:30.312955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:22.498 [2024-12-15 13:08:30.312966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:22.498 [2024-12-15 13:08:30.313038] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:28:22.498 [2024-12-15 13:08:30.313055] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:28:22.498 [2024-12-15 13:08:30.313119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:22.498 task offset: 24576 on job bdev=Nvme4n1 fails 00:28:22.498 00:28:22.498 Latency(us) 00:28:22.498 [2024-12-15T12:08:30.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.498 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme1n1 ended in about 0.80 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme1n1 : 0.80 159.62 9.98 79.81 0.00 264348.69 48933.55 205720.62 00:28:22.498 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme2n1 ended in about 0.78 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme2n1 : 0.78 244.61 15.29 81.54 0.00 190092.86 4119.41 232684.01 00:28:22.498 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme3n1 ended in about 0.80 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme3n1 : 0.80 238.82 14.93 79.61 0.00 190988.07 13044.78 220700.28 00:28:22.498 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme4n1 ended in about 0.78 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme4n1 : 0.78 245.01 15.31 81.67 0.00 182050.13 4681.14 223696.21 00:28:22.498 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme5n1 ended in about 0.81 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme5n1 : 0.81 245.66 15.35 79.40 0.00 179542.91 12919.95 200727.41 
00:28:22.498 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme6n1 ended in about 0.81 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme6n1 : 0.81 158.42 9.90 79.21 0.00 240548.33 36450.50 217704.35 00:28:22.498 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme7n1 ended in about 0.79 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme7n1 : 0.79 242.74 15.17 80.91 0.00 172270.81 11734.06 213709.78 00:28:22.498 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme8n1 ended in about 0.79 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme8n1 : 0.79 168.39 10.52 81.03 0.00 218502.12 16976.94 210713.84 00:28:22.498 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme9n1 ended in about 0.81 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme9n1 : 0.81 164.22 10.26 72.85 0.00 225092.10 17975.59 209715.20 00:28:22.498 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.498 Job: Nvme10n1 ended in about 0.81 seconds with error 00:28:22.498 Verification LBA range: start 0x0 length 0x400 00:28:22.498 Nvme10n1 : 0.81 157.65 9.85 78.83 0.00 221276.65 19473.55 232684.01 00:28:22.498 [2024-12-15T12:08:30.405Z] =================================================================================================================== 00:28:22.498 [2024-12-15T12:08:30.405Z] Total : 2025.13 126.57 794.85 0.00 204793.90 4119.41 232684.01 00:28:22.498 [2024-12-15 13:08:30.344109] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:22.498 [2024-12-15 13:08:30.344158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:22.498 
[2024-12-15 13:08:30.344424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.344445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c6140 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.344465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c6140 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.344670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.344683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c4440 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.344691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c4440 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.344787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.344799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe96e0 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.344807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe96e0 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.344894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.344907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf1300 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.344915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf1300 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.346226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:22.498 [2024-12-15 13:08:30.346243] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:22.498 [2024-12-15 13:08:30.346253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:22.498 [2024-12-15 13:08:30.346262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:22.498 [2024-12-15 13:08:30.346468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.346487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc23750 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.346496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc23750 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.346693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.346706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc23970 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.346717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc23970 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.346733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c6140 (9): Bad file descriptor 00:28:22.498 [2024-12-15 13:08:30.346748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c4440 (9): Bad file descriptor 00:28:22.498 [2024-12-15 13:08:30.346757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe96e0 (9): Bad file descriptor 00:28:22.498 [2024-12-15 13:08:30.346767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf1300 (9): Bad file descriptor 00:28:22.498 [2024-12-15 13:08:30.346801] 
bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:22.498 [2024-12-15 13:08:30.346814] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:22.498 [2024-12-15 13:08:30.346829] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:22.498 [2024-12-15 13:08:30.346841] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:28:22.498 [2024-12-15 13:08:30.347038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.347056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d1610 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.347064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1610 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.347279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.347292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc335c0 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.347299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc335c0 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.347393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.347405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf1dc0 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.347412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf1dc0 
is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.347559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.498 [2024-12-15 13:08:30.347571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c5cd0 with addr=10.0.0.2, port=4420 00:28:22.498 [2024-12-15 13:08:30.347580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c5cd0 is same with the state(6) to be set 00:28:22.498 [2024-12-15 13:08:30.347589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc23750 (9): Bad file descriptor 00:28:22.498 [2024-12-15 13:08:30.347598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc23970 (9): Bad file descriptor 00:28:22.498 [2024-12-15 13:08:30.347608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:22.498 [2024-12-15 13:08:30.347614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:22.498 [2024-12-15 13:08:30.347625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:22.498 [2024-12-15 13:08:30.347635] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:22.498 [2024-12-15 13:08:30.347643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:22.498 [2024-12-15 13:08:30.347650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:22.499 [2024-12-15 13:08:30.347658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:28:22.499 [2024-12-15 13:08:30.347666] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:22.499 [2024-12-15 13:08:30.347674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:22.499 [2024-12-15 13:08:30.347681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:22.499 [2024-12-15 13:08:30.347688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:22.499 [2024-12-15 13:08:30.347694] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:22.499 [2024-12-15 13:08:30.347701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:22.499 [2024-12-15 13:08:30.347707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:22.499 [2024-12-15 13:08:30.347714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:22.499 [2024-12-15 13:08:30.347721] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:28:22.499 [2024-12-15 13:08:30.347796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1610 (9): Bad file descriptor 00:28:22.499 [2024-12-15 13:08:30.347808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc335c0 (9): Bad file descriptor 00:28:22.499 [2024-12-15 13:08:30.347817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf1dc0 (9): Bad file descriptor 00:28:22.499 [2024-12-15 13:08:30.347833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c5cd0 (9): Bad file descriptor 00:28:22.499 [2024-12-15 13:08:30.347842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:22.499 [2024-12-15 13:08:30.347848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:22.499 [2024-12-15 13:08:30.347854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:22.499 [2024-12-15 13:08:30.347861] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:22.499 [2024-12-15 13:08:30.347868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:22.499 [2024-12-15 13:08:30.347874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:22.499 [2024-12-15 13:08:30.347882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:22.499 [2024-12-15 13:08:30.347888] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:28:22.499 [2024-12-15 13:08:30.347913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:22.499 [2024-12-15 13:08:30.347921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:22.499 [2024-12-15 13:08:30.347929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:22.499 [2024-12-15 13:08:30.347936] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:22.499 [2024-12-15 13:08:30.347943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:22.499 [2024-12-15 13:08:30.347949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:22.499 [2024-12-15 13:08:30.347957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:22.499 [2024-12-15 13:08:30.347963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:22.499 [2024-12-15 13:08:30.347971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:22.499 [2024-12-15 13:08:30.347977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:22.499 [2024-12-15 13:08:30.347984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:22.499 [2024-12-15 13:08:30.347990] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:28:22.499 [2024-12-15 13:08:30.347997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:22.499 [2024-12-15 13:08:30.348003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:22.499 [2024-12-15 13:08:30.348010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:22.499 [2024-12-15 13:08:30.348016] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:22.758 13:08:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1094763 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1094763 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1094763 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:24.136 13:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:24.136 rmmod nvme_tcp 00:28:24.136 rmmod nvme_fabrics 00:28:24.136 rmmod nvme_keyring 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1094708 ']' 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1094708 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1094708 ']' 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1094708 00:28:24.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1094708) - No such process 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1094708 is not found' 00:28:24.136 Process with pid 1094708 is not found 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:24.136 
13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.136 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.137 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.137 13:08:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.044 00:28:26.044 real 0m7.076s 00:28:26.044 user 0m16.204s 00:28:26.044 sys 0m1.298s 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.044 ************************************ 00:28:26.044 END TEST nvmf_shutdown_tc3 00:28:26.044 ************************************ 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:26.044 13:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.044 ************************************ 00:28:26.044 START TEST nvmf_shutdown_tc4 00:28:26.044 ************************************ 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.044 13:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:26.044 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.044 
13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:26.044 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:26.044 Found net devices under 0000:af:00.0: cvl_0_0 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.044 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:26.045 Found net devices under 0000:af:00.1: cvl_0_1 
00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.045 13:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.045 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:26.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:28:26.303 00:28:26.303 --- 10.0.0.2 ping statistics --- 00:28:26.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.303 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:26.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:28:26.303 00:28:26.303 --- 10.0.0.1 ping statistics --- 00:28:26.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.303 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.303 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1096001 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1096001 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1096001 ']' 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:26.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:26.561 [2024-12-15 13:08:34.273689] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:26.561 [2024-12-15 13:08:34.273738] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.561 [2024-12-15 13:08:34.350945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.561 [2024-12-15 13:08:34.373859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.561 [2024-12-15 13:08:34.373912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.561 [2024-12-15 13:08:34.373920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.561 [2024-12-15 13:08:34.373926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.561 [2024-12-15 13:08:34.373931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:26.561 [2024-12-15 13:08:34.375280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.561 [2024-12-15 13:08:34.375371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.561 [2024-12-15 13:08:34.375474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.561 [2024-12-15 13:08:34.375475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:26.561 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:26.820 [2024-12-15 13:08:34.507535] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.820 13:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.820 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:26.820 Malloc1 00:28:26.820 [2024-12-15 13:08:34.618303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.820 Malloc2 00:28:26.820 Malloc3 00:28:26.820 Malloc4 00:28:27.080 Malloc5 00:28:27.080 Malloc6 00:28:27.080 Malloc7 00:28:27.080 Malloc8 00:28:27.080 Malloc9 
00:28:27.080 Malloc10 00:28:27.340 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.340 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:27.340 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.340 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:27.340 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1096058 00:28:27.340 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:27.340 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:27.340 [2024-12-15 13:08:35.125455] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1096001
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1096001 ']'
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1096001
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1096001
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1096001'
killing process with pid 1096001
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1096001
00:28:32.619 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1096001
00:28:32.619 Write completed with error (sct=0, sc=8)
00:28:32.619 Write completed with error (sct=0, sc=8)
00:28:32.619 starting I/O failed: -6
00:28:32.619 Write completed with error (sct=0, sc=8)
00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 [2024-12-15 13:08:40.122795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.619 Write 
completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 
00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 [2024-12-15 13:08:40.123705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the state(6) to be set 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 [2024-12-15 13:08:40.123749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the state(6) to be set 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 [2024-12-15 13:08:40.123758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the state(6) to be set 00:28:32.619 [2024-12-15 13:08:40.123765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the
state(6) to be set 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 [2024-12-15 13:08:40.123772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the state(6) to be set 00:28:32.619 starting I/O failed: -6 00:28:32.619 [2024-12-15 13:08:40.123779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the state(6) to be set 00:28:32.619 [2024-12-15 13:08:40.123786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the state(6) to be set 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 [2024-12-15 13:08:40.123792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the state(6) to be set 00:28:32.619 starting I/O failed: -6 00:28:32.619 [2024-12-15 13:08:40.123799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the state(6) to be set 00:28:32.619 [2024-12-15 13:08:40.123805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9af0 is same with the state(6) to be set 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 
Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.619 starting I/O failed: -6 00:28:32.619 Write completed with error (sct=0, sc=8) 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 [2024-12-15 13:08:40.124181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9fc0 is same with the state(6) to be set 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 [2024-12-15 13:08:40.124207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9fc0 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.124215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9fc0 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.124222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9fc0 is same with the state(6) to be set 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 [2024-12-15 13:08:40.124228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9fc0 is same with the state(6) to be set 00:28:32.620 starting I/O failed: -6 00:28:32.620 [2024-12-15 13:08:40.124234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18f9fc0 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.124240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9fc0 is same with the state(6) to be set 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 [2024-12-15 13:08:40.124540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9150 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.124562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9150 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.124564] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.620 [2024-12-15 13:08:40.124571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9150 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.124579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9150 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.124586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9150 is same with the state(6) to be set 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, 
sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error 
(sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with 
error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 Write completed with error (sct=0, sc=8) 00:28:32.620 starting I/O failed: -6 00:28:32.620 [2024-12-15 13:08:40.126153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.620 NVMe io qpair process completion error 00:28:32.620 [2024-12-15 13:08:40.128108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae97c0 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae97c0 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae97c0 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae97c0 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae97c0 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae97c0 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.620 [2024-12-15 13:08:40.128841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.621 [2024-12-15 13:08:40.128848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.621 [2024-12-15 13:08:40.128854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.621 [2024-12-15 13:08:40.128861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.621 [2024-12-15 13:08:40.128867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9c90 is same with the state(6) to be set 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error 
(sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 [2024-12-15 13:08:40.129677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e20 is same with the state(6) to be set 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 [2024-12-15 13:08:40.129701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8e20 is same with the state(6) to be set 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 [2024-12-15 13:08:40.129749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.621 starting I/O failed: -6 00:28:32.621 starting I/O failed: -6 00:28:32.621 starting I/O
failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 [2024-12-15 13:08:40.130166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae6c70 is same with the state(6) to be set 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 [2024-12-15 13:08:40.130190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae6c70 is same with the state(6) to be set 00:28:32.621 [2024-12-15 13:08:40.130198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae6c70 is same with the state(6) to be set 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 [2024-12-15 13:08:40.130205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae6c70 is same with the state(6) to be set 00:28:32.621 [2024-12-15 13:08:40.130214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae6c70 is same with the state(6) to be set 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 [2024-12-15 13:08:40.130220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae6c70 is same with the state(6) to be set 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error 
(sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 00:28:32.621 [2024-12-15 13:08:40.130726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 
00:28:32.621 Write completed with error (sct=0, sc=8) 00:28:32.621 starting I/O failed: -6 [preceding two lines repeated; duplicates elided]
00:28:32.622 [2024-12-15 13:08:40.131708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.622 Write completed with error (sct=0, sc=8) 00:28:32.622 starting I/O failed: -6 [duplicates elided]
00:28:32.622 [2024-12-15 13:08:40.133232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:32.622 NVMe io qpair process completion error
00:28:32.622 Write completed with error (sct=0, sc=8) 00:28:32.622 starting I/O failed: -6 [duplicates elided]
00:28:32.622 [2024-12-15 13:08:40.134236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.622 Write completed with error (sct=0, sc=8) 00:28:32.622 starting I/O failed: -6 [duplicates elided]
00:28:32.623 [2024-12-15 13:08:40.135068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.623 Write completed with error (sct=0, sc=8) 00:28:32.623 starting I/O failed: -6 [duplicates elided]
00:28:32.623 [2024-12-15 13:08:40.136065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:32.623 Write completed with error (sct=0, sc=8) 00:28:32.623 starting I/O failed: -6 [duplicates elided]
00:28:32.624 [2024-12-15 13:08:40.137819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.624 NVMe io qpair process completion error
00:28:32.624 Write completed with error (sct=0, sc=8) 00:28:32.624 starting I/O failed: -6 [duplicates elided]
00:28:32.624 [2024-12-15 13:08:40.138990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.624 Write completed with error (sct=0, sc=8) 00:28:32.624 starting I/O failed: -6 [duplicates elided]
00:28:32.625 [2024-12-15 13:08:40.139919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 [duplicates elided]
00:28:32.625 [2024-12-15 13:08:40.140927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 [duplicates elided]
00:28:32.625 [2024-12-15 13:08:40.142883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.625 NVMe io qpair process completion error
00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 [duplicates elided]
Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 [2024-12-15 13:08:40.143842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 
starting I/O failed: -6 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 Write completed with error (sct=0, sc=8) 00:28:32.625 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 [2024-12-15 13:08:40.144687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: 
-6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with 
error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 [2024-12-15 13:08:40.145749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.626 Write completed with error (sct=0, 
sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error 
(sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with 
error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.626 Write completed with error (sct=0, sc=8) 00:28:32.626 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 [2024-12-15 13:08:40.152876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.627 NVMe io qpair process completion error 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 
00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 [2024-12-15 
13:08:40.153933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed 
with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 [2024-12-15 13:08:40.154682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed 
with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 
Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.627 Write completed with error (sct=0, sc=8) 00:28:32.627 starting I/O failed: -6 00:28:32.628 Write completed with error (sct=0, sc=8) 00:28:32.628 Write completed with error (sct=0, sc=8) 00:28:32.628 starting I/O failed: -6 00:28:32.628 [2024-12-15 13:08:40.155760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.628 Write completed with error (sct=0, sc=8) 00:28:32.628 starting I/O failed: -6 00:28:32.628 Write completed with error (sct=0, sc=8) 00:28:32.628 starting I/O failed: -6 00:28:32.628 Write completed with error (sct=0, sc=8) 00:28:32.628 starting I/O failed: -6 00:28:32.628 Write completed with error (sct=0, sc=8) 00:28:32.628 starting I/O failed: -6 00:28:32.628 Write completed with error (sct=0, 
sc=8) 00:28:32.628 starting I/O failed: -6
00:28:32.628 Write completed with error (sct=0, sc=8)
00:28:32.628 starting I/O failed: -6
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages repeated, omitted ...]
00:28:32.628 [2024-12-15 13:08:40.157463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.628 NVMe io qpair process completion error
[... repeated I/O failure messages omitted ...]
00:28:32.628 [2024-12-15 13:08:40.158522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated I/O failure messages omitted ...]
00:28:32.629 [2024-12-15 13:08:40.159508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated I/O failure messages omitted ...]
00:28:32.629 [2024-12-15 13:08:40.160929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated I/O failure messages omitted ...]
00:28:32.630 [2024-12-15 13:08:40.162636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:32.630 NVMe io qpair process completion error
[... repeated I/O failure messages omitted ...]
00:28:32.630 [2024-12-15 13:08:40.163682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated I/O failure messages omitted ...]
00:28:32.630 [2024-12-15 13:08:40.164667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated I/O failure messages omitted ...]
00:28:32.631 [2024-12-15 13:08:40.165712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated I/O failure messages omitted ...]
00:28:32.631 [2024-12-15 13:08:40.174732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.631 NVMe io qpair process completion error
[... repeated I/O failure messages omitted ...]
00:28:32.632 [2024-12-15 13:08:40.175703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated I/O failure messages omitted ...]
00:28:32.632 Write
completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 [2024-12-15 13:08:40.176837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write 
completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 
00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 [2024-12-15 13:08:40.178035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 
Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.632 Write completed with error (sct=0, sc=8) 00:28:32.632 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 
00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: 
-6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 [2024-12-15 13:08:40.180792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.633 NVMe io qpair process completion error 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write 
completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed 
with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, 
sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.633 Write completed with error (sct=0, sc=8) 00:28:32.633 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 
Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, 
sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error 
(sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with error (sct=0, sc=8) 00:28:32.634 starting I/O failed: -6 00:28:32.634 Write completed with 
error (sct=0, sc=8)
00:28:32.634 starting I/O failed: -6
00:28:32.634 Initializing NVMe Controllers
00:28:32.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:32.634 Controller IO queue size 128, less than required.
00:28:32.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:32.634 Controller IO queue size 128, less than required.
00:28:32.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:32.634 Controller IO queue size 128, less than required.
00:28:32.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:32.634 Controller IO queue size 128, less than required.
00:28:32.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:32.634 Controller IO queue size 128, less than required.
00:28:32.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:32.634 Controller IO queue size 128, less than required.
00:28:32.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:32.635 Controller IO queue size 128, less than required.
00:28:32.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:32.635 Controller IO queue size 128, less than required.
00:28:32.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:32.635 Controller IO queue size 128, less than required.
00:28:32.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:32.635 Controller IO queue size 128, less than required.
00:28:32.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:32.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:32.635 Initialization complete. Launching workers.
00:28:32.635 ======================================================== 00:28:32.635 Latency(us) 00:28:32.635 Device Information : IOPS MiB/s Average min max 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2203.70 94.69 58089.99 648.40 94068.74 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2196.67 94.39 58286.38 727.97 128089.95 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2242.60 96.36 57109.12 512.53 127367.31 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2221.28 95.45 57672.40 881.97 126763.55 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2242.16 96.34 57216.02 757.87 126566.21 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2198.43 94.46 58371.88 744.95 113499.11 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2193.81 94.27 58514.60 980.33 116486.19 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2194.03 94.27 58618.73 1121.14 105173.07 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2221.28 95.45 57180.04 633.60 96617.25 00:28:32.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2185.46 93.91 58828.18 1057.67 127358.38 00:28:32.635 ======================================================== 00:28:32.635 Total : 22099.44 949.59 57983.55 512.53 128089.95 00:28:32.635 00:28:32.635 [2024-12-15 13:08:40.191098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d70550 is same with the state(6) to be set 00:28:32.635 [2024-12-15 13:08:40.191153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d70370 is same with the state(6) to be set 00:28:32.635 [2024-12-15 13:08:40.191182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d75b30 is same with the state(6) to be set 00:28:32.635 [2024-12-15 13:08:40.191213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71ff0 is same with the state(6) to be set 00:28:32.635 [2024-12-15 13:08:40.191242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72320 is same with the state(6) to be set 00:28:32.635 [2024-12-15 13:08:40.191269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d70190 is same with the state(6) to be set 00:28:32.635 [2024-12-15 13:08:40.191297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72650 is same with the state(6) to be set 00:28:32.635 [2024-12-15 13:08:40.191324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71cc0 is same with the state(6) to be set 00:28:32.635 [2024-12-15 13:08:40.191353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ffb0 is same with the state(6) to be set 00:28:32.635 [2024-12-15 13:08:40.191383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d70880 is same with the state(6) to be set 00:28:32.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:32.635 13:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1096058 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1096058 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1096058 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.016 rmmod nvme_tcp 00:28:34.016 rmmod nvme_fabrics 00:28:34.016 rmmod nvme_keyring 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1096001 ']' 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1096001 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1096001 ']' 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1096001 00:28:34.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1096001) - No such process 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1096001 is not found' 00:28:34.016 Process with pid 1096001 is not found 
00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.016 13:08:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:35.922 00:28:35.922 real 0m9.756s 00:28:35.922 user 0m25.124s 00:28:35.922 sys 0m4.928s 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.922 13:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:35.922 ************************************ 00:28:35.922 END TEST nvmf_shutdown_tc4 00:28:35.922 ************************************ 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:35.922 00:28:35.922 real 0m39.474s 00:28:35.922 user 1m35.358s 00:28:35.922 sys 0m13.458s 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:35.922 ************************************ 00:28:35.922 END TEST nvmf_shutdown 00:28:35.922 ************************************ 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:35.922 ************************************ 00:28:35.922 START TEST nvmf_nsid 00:28:35.922 ************************************ 00:28:35.922 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:36.182 * Looking for test storage... 
00:28:36.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.182 
13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:36.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.182 --rc genhtml_branch_coverage=1 00:28:36.182 --rc genhtml_function_coverage=1 00:28:36.182 --rc genhtml_legend=1 00:28:36.182 --rc geninfo_all_blocks=1 00:28:36.182 --rc 
geninfo_unexecuted_blocks=1 00:28:36.182 00:28:36.182 ' 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:36.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.182 --rc genhtml_branch_coverage=1 00:28:36.182 --rc genhtml_function_coverage=1 00:28:36.182 --rc genhtml_legend=1 00:28:36.182 --rc geninfo_all_blocks=1 00:28:36.182 --rc geninfo_unexecuted_blocks=1 00:28:36.182 00:28:36.182 ' 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:36.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.182 --rc genhtml_branch_coverage=1 00:28:36.182 --rc genhtml_function_coverage=1 00:28:36.182 --rc genhtml_legend=1 00:28:36.182 --rc geninfo_all_blocks=1 00:28:36.182 --rc geninfo_unexecuted_blocks=1 00:28:36.182 00:28:36.182 ' 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:36.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.182 --rc genhtml_branch_coverage=1 00:28:36.182 --rc genhtml_function_coverage=1 00:28:36.182 --rc genhtml_legend=1 00:28:36.182 --rc geninfo_all_blocks=1 00:28:36.182 --rc geninfo_unexecuted_blocks=1 00:28:36.182 00:28:36.182 ' 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.182 13:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.182 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:36.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:36.183 13:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.755 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:42.756 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:42.756 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:42.756 Found net devices under 0000:af:00.0: cvl_0_0 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:42.756 Found net devices under 0000:af:00.1: cvl_0_1 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.756 13:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.756 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:28:42.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:28:42.756 00:28:42.756 --- 10.0.0.2 ping statistics --- 00:28:42.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.756 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:42.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:28:42.756 00:28:42.756 --- 10.0.0.1 ping statistics --- 00:28:42.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.756 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:42.756 13:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1100629 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1100629 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1100629 ']' 00:28:42.756 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.757 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.757 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.757 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.757 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:42.757 [2024-12-15 13:08:49.916781] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:42.757 [2024-12-15 13:08:49.916834] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.757 [2024-12-15 13:08:49.995891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.757 [2024-12-15 13:08:50.019487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.757 [2024-12-15 13:08:50.019521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.757 [2024-12-15 13:08:50.019529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.757 [2024-12-15 13:08:50.019534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.757 [2024-12-15 13:08:50.019539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:42.757 [2024-12-15 13:08:50.019998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1100664 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.757 
13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=24f13df8-388a-48f5-a0be-1b6f640f11db 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=90a1da7b-7b93-4e0b-9ea6-609dd88e871e 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=846e3e8a-f315-4750-bd39-28ebf780541b 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:42.757 null0 00:28:42.757 null1 00:28:42.757 [2024-12-15 13:08:50.209553] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:42.757 [2024-12-15 13:08:50.209597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100664 ] 00:28:42.757 null2 00:28:42.757 [2024-12-15 13:08:50.216407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.757 [2024-12-15 13:08:50.240590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1100664 /var/tmp/tgt2.sock 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1100664 ']' 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:42.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:42.757 [2024-12-15 13:08:50.283815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.757 [2024-12-15 13:08:50.305819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:42.757 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:43.016 [2024-12-15 13:08:50.823066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.016 [2024-12-15 13:08:50.839153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:43.016 nvme0n1 nvme0n2 00:28:43.016 nvme1n1 00:28:43.017 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:43.017 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:43.017 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:44.392 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:44.392 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:44.392 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:44.393 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 24f13df8-388a-48f5-a0be-1b6f640f11db 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:45.330 13:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:45.330 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=24f13df8388a48f5a0be1b6f640f11db 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 24F13DF8388A48F5A0BE1B6F640F11DB 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 24F13DF8388A48F5A0BE1B6F640F11DB == \2\4\F\1\3\D\F\8\3\8\8\A\4\8\F\5\A\0\B\E\1\B\6\F\6\4\0\F\1\1\D\B ]] 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 90a1da7b-7b93-4e0b-9ea6-609dd88e871e 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:45.330 
13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=90a1da7b7b934e0b9ea6609dd88e871e 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 90A1DA7B7B934E0B9EA6609DD88E871E 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 90A1DA7B7B934E0B9EA6609DD88E871E == \9\0\A\1\D\A\7\B\7\B\9\3\4\E\0\B\9\E\A\6\6\0\9\D\D\8\8\E\8\7\1\E ]] 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 846e3e8a-f315-4750-bd39-28ebf780541b 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=846e3e8af3154750bd3928ebf780541b 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 846E3E8AF3154750BD3928EBF780541B 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 846E3E8AF3154750BD3928EBF780541B == \8\4\6\E\3\E\8\A\F\3\1\5\4\7\5\0\B\D\3\9\2\8\E\B\F\7\8\0\5\4\1\B ]] 00:28:45.330 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1100664 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1100664 ']' 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1100664 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100664 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100664' 00:28:45.589 killing process with pid 1100664 00:28:45.589 13:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1100664 00:28:45.589 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1100664 00:28:45.848 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:45.848 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:45.848 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:45.848 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.848 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:28:45.848 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.848 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:45.848 rmmod nvme_tcp 00:28:45.848 rmmod nvme_fabrics 00:28:46.107 rmmod nvme_keyring 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1100629 ']' 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1100629 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1100629 ']' 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1100629 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.107 13:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100629 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100629' 00:28:46.107 killing process with pid 1100629 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1100629 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1100629 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:46.107 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:28:46.107 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.107 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.107 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.107 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.107 13:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.643 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:48.643 00:28:48.643 real 0m12.301s 00:28:48.643 user 0m9.590s 00:28:48.643 sys 0m5.457s 00:28:48.643 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.643 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:48.643 ************************************ 00:28:48.643 END TEST nvmf_nsid 00:28:48.643 ************************************ 00:28:48.643 13:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:48.643 00:28:48.643 real 18m28.604s 00:28:48.643 user 48m47.875s 00:28:48.643 sys 4m43.358s 00:28:48.643 13:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.643 13:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:48.643 ************************************ 00:28:48.643 END TEST nvmf_target_extra 00:28:48.643 ************************************ 00:28:48.643 13:08:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:48.643 13:08:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:48.643 13:08:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.643 13:08:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:48.643 ************************************ 00:28:48.643 START TEST nvmf_host 00:28:48.643 ************************************ 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:48.643 * Looking for test storage... 
00:28:48.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:48.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.643 --rc genhtml_branch_coverage=1 00:28:48.643 --rc genhtml_function_coverage=1 00:28:48.643 --rc genhtml_legend=1 00:28:48.643 --rc geninfo_all_blocks=1 00:28:48.643 --rc geninfo_unexecuted_blocks=1 00:28:48.643 00:28:48.643 ' 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:48.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.643 --rc genhtml_branch_coverage=1 00:28:48.643 --rc genhtml_function_coverage=1 00:28:48.643 --rc genhtml_legend=1 00:28:48.643 --rc 
geninfo_all_blocks=1 00:28:48.643 --rc geninfo_unexecuted_blocks=1 00:28:48.643 00:28:48.643 ' 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:48.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.643 --rc genhtml_branch_coverage=1 00:28:48.643 --rc genhtml_function_coverage=1 00:28:48.643 --rc genhtml_legend=1 00:28:48.643 --rc geninfo_all_blocks=1 00:28:48.643 --rc geninfo_unexecuted_blocks=1 00:28:48.643 00:28:48.643 ' 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:48.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.643 --rc genhtml_branch_coverage=1 00:28:48.643 --rc genhtml_function_coverage=1 00:28:48.643 --rc genhtml_legend=1 00:28:48.643 --rc geninfo_all_blocks=1 00:28:48.643 --rc geninfo_unexecuted_blocks=1 00:28:48.643 00:28:48.643 ' 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.643 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.644 ************************************ 00:28:48.644 START TEST nvmf_multicontroller 00:28:48.644 ************************************ 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:48.644 * Looking for test storage... 
00:28:48.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:28:48.644 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:48.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.904 --rc genhtml_branch_coverage=1 00:28:48.904 --rc genhtml_function_coverage=1 
00:28:48.904 --rc genhtml_legend=1 00:28:48.904 --rc geninfo_all_blocks=1 00:28:48.904 --rc geninfo_unexecuted_blocks=1 00:28:48.904 00:28:48.904 ' 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:48.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.904 --rc genhtml_branch_coverage=1 00:28:48.904 --rc genhtml_function_coverage=1 00:28:48.904 --rc genhtml_legend=1 00:28:48.904 --rc geninfo_all_blocks=1 00:28:48.904 --rc geninfo_unexecuted_blocks=1 00:28:48.904 00:28:48.904 ' 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:48.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.904 --rc genhtml_branch_coverage=1 00:28:48.904 --rc genhtml_function_coverage=1 00:28:48.904 --rc genhtml_legend=1 00:28:48.904 --rc geninfo_all_blocks=1 00:28:48.904 --rc geninfo_unexecuted_blocks=1 00:28:48.904 00:28:48.904 ' 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:48.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.904 --rc genhtml_branch_coverage=1 00:28:48.904 --rc genhtml_function_coverage=1 00:28:48.904 --rc genhtml_legend=1 00:28:48.904 --rc geninfo_all_blocks=1 00:28:48.904 --rc geninfo_unexecuted_blocks=1 00:28:48.904 00:28:48.904 ' 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.904 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.905 13:08:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.905 13:08:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:55.476 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:55.476 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.476 13:09:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:55.476 Found net devices under 0000:af:00.0: cvl_0_0 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:55.476 Found net devices under 0000:af:00.1: cvl_0_1 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:55.476 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:28:55.477 00:28:55.477 --- 10.0.0.2 ping statistics --- 00:28:55.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.477 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:55.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:28:55.477 00:28:55.477 --- 10.0.0.1 ping statistics --- 00:28:55.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.477 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1104988 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1104988 00:28:55.477 13:09:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1104988 ']' 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 [2024-12-15 13:09:02.708110] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:55.477 [2024-12-15 13:09:02.708165] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.477 [2024-12-15 13:09:02.791456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:55.477 [2024-12-15 13:09:02.815272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.477 [2024-12-15 13:09:02.815304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:55.477 [2024-12-15 13:09:02.815312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.477 [2024-12-15 13:09:02.815318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.477 [2024-12-15 13:09:02.815323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.477 [2024-12-15 13:09:02.816492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.477 [2024-12-15 13:09:02.816598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.477 [2024-12-15 13:09:02.816600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 [2024-12-15 13:09:02.955900] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.477 13:09:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 Malloc0 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 [2024-12-15 
13:09:03.027273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 [2024-12-15 13:09:03.039235] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.477 Malloc1 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.477 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1105054 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1105054 /var/tmp/bdevperf.sock 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1105054 ']' 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:55.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.478 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.737 NVMe0n1 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.738 1 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:55.738 13:09:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.738 request: 00:28:55.738 { 00:28:55.738 "name": "NVMe0", 00:28:55.738 "trtype": "tcp", 00:28:55.738 "traddr": "10.0.0.2", 00:28:55.738 "adrfam": "ipv4", 00:28:55.738 "trsvcid": "4420", 00:28:55.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.738 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:55.738 "hostaddr": "10.0.0.1", 00:28:55.738 "prchk_reftag": false, 00:28:55.738 "prchk_guard": false, 00:28:55.738 "hdgst": false, 00:28:55.738 "ddgst": false, 00:28:55.738 "allow_unrecognized_csi": false, 00:28:55.738 "method": "bdev_nvme_attach_controller", 00:28:55.738 "req_id": 1 00:28:55.738 } 00:28:55.738 Got JSON-RPC error response 00:28:55.738 response: 00:28:55.738 { 00:28:55.738 "code": -114, 00:28:55.738 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:55.738 } 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:55.738 13:09:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.738 request: 00:28:55.738 { 00:28:55.738 "name": "NVMe0", 00:28:55.738 "trtype": "tcp", 00:28:55.738 "traddr": "10.0.0.2", 00:28:55.738 "adrfam": "ipv4", 00:28:55.738 "trsvcid": "4420", 00:28:55.738 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:55.738 "hostaddr": "10.0.0.1", 00:28:55.738 "prchk_reftag": false, 00:28:55.738 "prchk_guard": false, 00:28:55.738 "hdgst": false, 00:28:55.738 "ddgst": false, 00:28:55.738 "allow_unrecognized_csi": false, 00:28:55.738 "method": "bdev_nvme_attach_controller", 00:28:55.738 "req_id": 1 00:28:55.738 } 00:28:55.738 Got JSON-RPC error response 00:28:55.738 response: 00:28:55.738 { 00:28:55.738 "code": -114, 00:28:55.738 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:55.738 } 00:28:55.738 13:09:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.738 request: 00:28:55.738 { 00:28:55.738 "name": "NVMe0", 00:28:55.738 "trtype": "tcp", 00:28:55.738 "traddr": "10.0.0.2", 00:28:55.738 "adrfam": "ipv4", 00:28:55.738 "trsvcid": "4420", 00:28:55.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.738 "hostaddr": "10.0.0.1", 00:28:55.738 "prchk_reftag": false, 00:28:55.738 "prchk_guard": false, 00:28:55.738 "hdgst": false, 00:28:55.738 "ddgst": false, 00:28:55.738 "multipath": "disable", 00:28:55.738 "allow_unrecognized_csi": false, 00:28:55.738 "method": "bdev_nvme_attach_controller", 00:28:55.738 "req_id": 1 00:28:55.738 } 00:28:55.738 Got JSON-RPC error response 00:28:55.738 response: 00:28:55.738 { 00:28:55.738 "code": -114, 00:28:55.738 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:28:55.738 } 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.738 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.739 request: 00:28:55.739 { 00:28:55.739 "name": "NVMe0", 00:28:55.739 "trtype": "tcp", 00:28:55.739 "traddr": "10.0.0.2", 00:28:55.739 "adrfam": "ipv4", 00:28:55.739 "trsvcid": "4420", 00:28:55.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.739 "hostaddr": "10.0.0.1", 00:28:55.739 "prchk_reftag": false, 00:28:55.739 "prchk_guard": false, 00:28:55.739 "hdgst": false, 00:28:55.739 "ddgst": false, 00:28:55.739 "multipath": "failover", 00:28:55.739 "allow_unrecognized_csi": false, 00:28:55.739 "method": "bdev_nvme_attach_controller", 00:28:55.739 "req_id": 1 00:28:55.739 } 00:28:55.739 Got JSON-RPC error response 00:28:55.739 response: 00:28:55.739 { 00:28:55.739 "code": -114, 00:28:55.739 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:55.739 } 00:28:55.739 13:09:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.739 NVMe0n1 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.739 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.998 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:55.998 13:09:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:57.376 { 00:28:57.376 "results": [ 00:28:57.376 { 00:28:57.376 "job": "NVMe0n1", 00:28:57.376 "core_mask": "0x1", 00:28:57.376 "workload": "write", 00:28:57.376 "status": "finished", 00:28:57.376 "queue_depth": 128, 00:28:57.376 "io_size": 4096, 00:28:57.376 "runtime": 1.002765, 00:28:57.376 "iops": 25208.299053118128, 00:28:57.376 "mibps": 98.46991817624269, 00:28:57.376 "io_failed": 0, 00:28:57.376 "io_timeout": 0, 00:28:57.376 "avg_latency_us": 5071.391799984176, 00:28:57.376 "min_latency_us": 3105.158095238095, 00:28:57.376 "max_latency_us": 10048.853333333333 00:28:57.376 } 00:28:57.376 ], 00:28:57.376 "core_count": 1 00:28:57.376 } 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1105054 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1105054 ']' 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1105054 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.376 13:09:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1105054 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1105054' 00:28:57.376 killing process with pid 1105054 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1105054 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1105054 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:28:57.376 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:57.376 [2024-12-15 13:09:03.140102] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:57.376 [2024-12-15 13:09:03.140154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105054 ] 00:28:57.376 [2024-12-15 13:09:03.214327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.376 [2024-12-15 13:09:03.236973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.376 [2024-12-15 13:09:03.832129] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name eaef4be1-8757-4733-b0f5-2486cc7811ac already exists 00:28:57.376 [2024-12-15 13:09:03.832157] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:eaef4be1-8757-4733-b0f5-2486cc7811ac alias for bdev NVMe1n1 00:28:57.376 [2024-12-15 13:09:03.832165] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:57.376 Running I/O for 1 seconds... 00:28:57.376 25150.00 IOPS, 98.24 MiB/s 00:28:57.376 Latency(us) 00:28:57.376 [2024-12-15T12:09:05.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.376 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:57.376 NVMe0n1 : 1.00 25208.30 98.47 0.00 0.00 5071.39 3105.16 10048.85 00:28:57.376 [2024-12-15T12:09:05.283Z] =================================================================================================================== 00:28:57.376 [2024-12-15T12:09:05.283Z] Total : 25208.30 98.47 0.00 0.00 5071.39 3105.16 10048.85 00:28:57.376 Received shutdown signal, test time was about 1.000000 seconds 00:28:57.376 00:28:57.376 Latency(us) 00:28:57.376 [2024-12-15T12:09:05.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.376 [2024-12-15T12:09:05.283Z] =================================================================================================================== 00:28:57.376 [2024-12-15T12:09:05.283Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:28:57.376 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.376 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.376 rmmod nvme_tcp 00:28:57.376 rmmod nvme_fabrics 00:28:57.634 rmmod nvme_keyring 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1104988 ']' 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1104988 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1104988 ']' 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1104988 
00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1104988 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1104988' 00:28:57.634 killing process with pid 1104988 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1104988 00:28:57.634 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1104988 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.893 13:09:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.797 13:09:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.797 00:28:59.797 real 0m11.197s 00:28:59.797 user 0m12.129s 00:28:59.797 sys 0m5.160s 00:28:59.797 13:09:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.797 13:09:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:59.797 ************************************ 00:28:59.797 END TEST nvmf_multicontroller 00:28:59.797 ************************************ 00:28:59.797 13:09:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:59.797 13:09:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.797 13:09:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.797 13:09:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.797 ************************************ 00:28:59.797 START TEST nvmf_aer 00:29:00.056 ************************************ 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:00.056 * Looking for test storage... 
00:29:00.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:00.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.056 --rc genhtml_branch_coverage=1 00:29:00.056 --rc genhtml_function_coverage=1 00:29:00.056 --rc genhtml_legend=1 00:29:00.056 --rc geninfo_all_blocks=1 00:29:00.056 --rc geninfo_unexecuted_blocks=1 00:29:00.056 00:29:00.056 ' 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:00.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.056 --rc 
genhtml_branch_coverage=1 00:29:00.056 --rc genhtml_function_coverage=1 00:29:00.056 --rc genhtml_legend=1 00:29:00.056 --rc geninfo_all_blocks=1 00:29:00.056 --rc geninfo_unexecuted_blocks=1 00:29:00.056 00:29:00.056 ' 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:00.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.056 --rc genhtml_branch_coverage=1 00:29:00.056 --rc genhtml_function_coverage=1 00:29:00.056 --rc genhtml_legend=1 00:29:00.056 --rc geninfo_all_blocks=1 00:29:00.056 --rc geninfo_unexecuted_blocks=1 00:29:00.056 00:29:00.056 ' 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:00.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.056 --rc genhtml_branch_coverage=1 00:29:00.056 --rc genhtml_function_coverage=1 00:29:00.056 --rc genhtml_legend=1 00:29:00.056 --rc geninfo_all_blocks=1 00:29:00.056 --rc geninfo_unexecuted_blocks=1 00:29:00.056 00:29:00.056 ' 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.056 13:09:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.056 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.057 13:09:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:06.705 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:06.705 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.705 13:09:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:06.705 Found net devices under 0000:af:00.0: cvl_0_0 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:06.705 Found net devices under 0000:af:00.1: cvl_0_1 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.705 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:06.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:29:06.706 00:29:06.706 --- 10.0.0.2 ping statistics --- 00:29:06.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.706 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:06.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:29:06.706 00:29:06.706 --- 10.0.0.1 ping statistics --- 00:29:06.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.706 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1109170 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1109170 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1109170 ']' 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.706 13:09:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.706 [2024-12-15 13:09:13.845550] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:06.706 [2024-12-15 13:09:13.845604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.706 [2024-12-15 13:09:13.926810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.706 [2024-12-15 13:09:13.950578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:06.706 [2024-12-15 13:09:13.950615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.706 [2024-12-15 13:09:13.950622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.706 [2024-12-15 13:09:13.950628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.706 [2024-12-15 13:09:13.950633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.706 [2024-12-15 13:09:13.952065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.706 [2024-12-15 13:09:13.952176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.706 [2024-12-15 13:09:13.952284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.706 [2024-12-15 13:09:13.952285] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.706 [2024-12-15 13:09:14.079781] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.706 Malloc0 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.706 [2024-12-15 13:09:14.143413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.706 [ 00:29:06.706 { 00:29:06.706 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:06.706 "subtype": "Discovery", 00:29:06.706 "listen_addresses": [], 00:29:06.706 "allow_any_host": true, 00:29:06.706 "hosts": [] 00:29:06.706 }, 00:29:06.706 { 00:29:06.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.706 "subtype": "NVMe", 00:29:06.706 "listen_addresses": [ 00:29:06.706 { 00:29:06.706 "trtype": "TCP", 00:29:06.706 "adrfam": "IPv4", 00:29:06.706 "traddr": "10.0.0.2", 00:29:06.706 "trsvcid": "4420" 00:29:06.706 } 00:29:06.706 ], 00:29:06.706 "allow_any_host": true, 00:29:06.706 "hosts": [], 00:29:06.706 "serial_number": "SPDK00000000000001", 00:29:06.706 "model_number": "SPDK bdev Controller", 00:29:06.706 "max_namespaces": 2, 00:29:06.706 "min_cntlid": 1, 00:29:06.706 "max_cntlid": 65519, 00:29:06.706 "namespaces": [ 00:29:06.706 { 00:29:06.706 "nsid": 1, 00:29:06.706 "bdev_name": "Malloc0", 00:29:06.706 "name": "Malloc0", 00:29:06.706 "nguid": "3985B4E7FDA54D1FAB2DF4942756E5F9", 00:29:06.706 "uuid": "3985b4e7-fda5-4d1f-ab2d-f4942756e5f9" 00:29:06.706 } 00:29:06.706 ] 00:29:06.706 } 00:29:06.706 ] 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1109393 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:06.706 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.707 Malloc1 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.707 [ 00:29:06.707 { 00:29:06.707 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:06.707 "subtype": "Discovery", 00:29:06.707 "listen_addresses": [], 00:29:06.707 "allow_any_host": true, 00:29:06.707 "hosts": [] 00:29:06.707 }, 00:29:06.707 { 00:29:06.707 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.707 "subtype": "NVMe", 00:29:06.707 "listen_addresses": [ 00:29:06.707 { 00:29:06.707 "trtype": "TCP", 00:29:06.707 "adrfam": "IPv4", 00:29:06.707 "traddr": "10.0.0.2", 00:29:06.707 "trsvcid": "4420" 00:29:06.707 } 00:29:06.707 ], 00:29:06.707 "allow_any_host": true, 00:29:06.707 "hosts": [], 00:29:06.707 "serial_number": "SPDK00000000000001", 00:29:06.707 "model_number": 
"SPDK bdev Controller", 00:29:06.707 "max_namespaces": 2, 00:29:06.707 "min_cntlid": 1, 00:29:06.707 "max_cntlid": 65519, 00:29:06.707 "namespaces": [ 00:29:06.707 { 00:29:06.707 "nsid": 1, 00:29:06.707 "bdev_name": "Malloc0", 00:29:06.707 "name": "Malloc0", 00:29:06.707 "nguid": "3985B4E7FDA54D1FAB2DF4942756E5F9", 00:29:06.707 "uuid": "3985b4e7-fda5-4d1f-ab2d-f4942756e5f9" 00:29:06.707 }, 00:29:06.707 { 00:29:06.707 "nsid": 2, 00:29:06.707 "bdev_name": "Malloc1", 00:29:06.707 "name": "Malloc1", 00:29:06.707 Asynchronous Event Request test 00:29:06.707 Attaching to 10.0.0.2 00:29:06.707 Attached to 10.0.0.2 00:29:06.707 Registering asynchronous event callbacks... 00:29:06.707 Starting namespace attribute notice tests for all controllers... 00:29:06.707 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:06.707 aer_cb - Changed Namespace 00:29:06.707 Cleaning up... 00:29:06.707 "nguid": "C5173E1CB9DF48859BCB4AA19FF103CA", 00:29:06.707 "uuid": "c5173e1c-b9df-4885-9bcb-4aa19ff103ca" 00:29:06.707 } 00:29:06.707 ] 00:29:06.707 } 00:29:06.707 ] 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1109393 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.707 
13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:06.707 rmmod nvme_tcp 00:29:06.707 rmmod nvme_fabrics 00:29:06.707 rmmod nvme_keyring 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1109170 ']' 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1109170 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1109170 ']' 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 
-- # kill -0 1109170 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:06.707 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1109170 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1109170' 00:29:06.965 killing process with pid 1109170 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1109170 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1109170 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.965 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.966 13:09:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.501 13:09:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:09.501 00:29:09.501 real 0m9.161s 00:29:09.501 user 0m5.070s 00:29:09.501 sys 0m4.835s 00:29:09.501 13:09:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.501 13:09:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.501 ************************************ 00:29:09.501 END TEST nvmf_aer 00:29:09.501 ************************************ 00:29:09.501 13:09:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:09.501 13:09:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:09.501 13:09:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.501 13:09:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.501 ************************************ 00:29:09.501 START TEST nvmf_async_init 00:29:09.501 ************************************ 00:29:09.501 13:09:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:09.502 * Looking for test storage... 
00:29:09.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.502 13:09:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:09.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.502 --rc genhtml_branch_coverage=1 00:29:09.502 --rc genhtml_function_coverage=1 00:29:09.502 --rc genhtml_legend=1 00:29:09.502 --rc geninfo_all_blocks=1 00:29:09.502 --rc geninfo_unexecuted_blocks=1 00:29:09.502 
00:29:09.502 ' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:09.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.502 --rc genhtml_branch_coverage=1 00:29:09.502 --rc genhtml_function_coverage=1 00:29:09.502 --rc genhtml_legend=1 00:29:09.502 --rc geninfo_all_blocks=1 00:29:09.502 --rc geninfo_unexecuted_blocks=1 00:29:09.502 00:29:09.502 ' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:09.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.502 --rc genhtml_branch_coverage=1 00:29:09.502 --rc genhtml_function_coverage=1 00:29:09.502 --rc genhtml_legend=1 00:29:09.502 --rc geninfo_all_blocks=1 00:29:09.502 --rc geninfo_unexecuted_blocks=1 00:29:09.502 00:29:09.502 ' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:09.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.502 --rc genhtml_branch_coverage=1 00:29:09.502 --rc genhtml_function_coverage=1 00:29:09.502 --rc genhtml_legend=1 00:29:09.502 --rc geninfo_all_blocks=1 00:29:09.502 --rc geninfo_unexecuted_blocks=1 00:29:09.502 00:29:09.502 ' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:09.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a8d0e27cf8c34517980eca0c2a998f8f 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:09.502 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.503 13:09:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:16.077 13:09:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:16.077 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:16.077 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.077 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:16.078 Found net devices under 0000:af:00.0: cvl_0_0 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:16.078 Found net devices under 0000:af:00.1: cvl_0_1 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.078 13:09:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:16.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:16.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:29:16.078 00:29:16.078 --- 10.0.0.2 ping statistics --- 00:29:16.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.078 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:16.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:29:16.078 00:29:16.078 --- 10.0.0.1 ping statistics --- 00:29:16.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.078 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1112854 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1112854 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1112854 ']' 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.078 [2024-12-15 13:09:23.124476] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:16.078 [2024-12-15 13:09:23.124519] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.078 [2024-12-15 13:09:23.199946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.078 [2024-12-15 13:09:23.221773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.078 [2024-12-15 13:09:23.221806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.078 [2024-12-15 13:09:23.221813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.078 [2024-12-15 13:09:23.221819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.078 [2024-12-15 13:09:23.221845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:16.078 [2024-12-15 13:09:23.222350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.078 [2024-12-15 13:09:23.361275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.078 null0 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.078 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a8d0e27cf8c34517980eca0c2a998f8f 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 [2024-12-15 13:09:23.413537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 nvme0n1 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 [ 00:29:16.079 { 00:29:16.079 "name": "nvme0n1", 00:29:16.079 "aliases": [ 00:29:16.079 "a8d0e27c-f8c3-4517-980e-ca0c2a998f8f" 00:29:16.079 ], 00:29:16.079 "product_name": "NVMe disk", 00:29:16.079 "block_size": 512, 00:29:16.079 "num_blocks": 2097152, 00:29:16.079 "uuid": "a8d0e27c-f8c3-4517-980e-ca0c2a998f8f", 00:29:16.079 "numa_id": 1, 00:29:16.079 "assigned_rate_limits": { 00:29:16.079 "rw_ios_per_sec": 0, 00:29:16.079 "rw_mbytes_per_sec": 0, 00:29:16.079 "r_mbytes_per_sec": 0, 00:29:16.079 "w_mbytes_per_sec": 0 00:29:16.079 }, 00:29:16.079 "claimed": false, 00:29:16.079 "zoned": false, 00:29:16.079 "supported_io_types": { 00:29:16.079 "read": true, 00:29:16.079 "write": true, 00:29:16.079 "unmap": false, 00:29:16.079 "flush": true, 00:29:16.079 "reset": true, 00:29:16.079 "nvme_admin": true, 00:29:16.079 "nvme_io": true, 00:29:16.079 "nvme_io_md": false, 00:29:16.079 "write_zeroes": true, 00:29:16.079 "zcopy": false, 00:29:16.079 "get_zone_info": false, 00:29:16.079 "zone_management": false, 00:29:16.079 "zone_append": false, 00:29:16.079 "compare": true, 00:29:16.079 "compare_and_write": true, 00:29:16.079 "abort": true, 00:29:16.079 "seek_hole": false, 00:29:16.079 "seek_data": false, 00:29:16.079 "copy": true, 00:29:16.079 
"nvme_iov_md": false 00:29:16.079 }, 00:29:16.079 "memory_domains": [ 00:29:16.079 { 00:29:16.079 "dma_device_id": "system", 00:29:16.079 "dma_device_type": 1 00:29:16.079 } 00:29:16.079 ], 00:29:16.079 "driver_specific": { 00:29:16.079 "nvme": [ 00:29:16.079 { 00:29:16.079 "trid": { 00:29:16.079 "trtype": "TCP", 00:29:16.079 "adrfam": "IPv4", 00:29:16.079 "traddr": "10.0.0.2", 00:29:16.079 "trsvcid": "4420", 00:29:16.079 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:16.079 }, 00:29:16.079 "ctrlr_data": { 00:29:16.079 "cntlid": 1, 00:29:16.079 "vendor_id": "0x8086", 00:29:16.079 "model_number": "SPDK bdev Controller", 00:29:16.079 "serial_number": "00000000000000000000", 00:29:16.079 "firmware_revision": "25.01", 00:29:16.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.079 "oacs": { 00:29:16.079 "security": 0, 00:29:16.079 "format": 0, 00:29:16.079 "firmware": 0, 00:29:16.079 "ns_manage": 0 00:29:16.079 }, 00:29:16.079 "multi_ctrlr": true, 00:29:16.079 "ana_reporting": false 00:29:16.079 }, 00:29:16.079 "vs": { 00:29:16.079 "nvme_version": "1.3" 00:29:16.079 }, 00:29:16.079 "ns_data": { 00:29:16.079 "id": 1, 00:29:16.079 "can_share": true 00:29:16.079 } 00:29:16.079 } 00:29:16.079 ], 00:29:16.079 "mp_policy": "active_passive" 00:29:16.079 } 00:29:16.079 } 00:29:16.079 ] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 [2024-12-15 13:09:23.682098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:16.079 [2024-12-15 13:09:23.682158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x92ca90 (9): Bad file descriptor 00:29:16.079 [2024-12-15 13:09:23.815907] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 [ 00:29:16.079 { 00:29:16.079 "name": "nvme0n1", 00:29:16.079 "aliases": [ 00:29:16.079 "a8d0e27c-f8c3-4517-980e-ca0c2a998f8f" 00:29:16.079 ], 00:29:16.079 "product_name": "NVMe disk", 00:29:16.079 "block_size": 512, 00:29:16.079 "num_blocks": 2097152, 00:29:16.079 "uuid": "a8d0e27c-f8c3-4517-980e-ca0c2a998f8f", 00:29:16.079 "numa_id": 1, 00:29:16.079 "assigned_rate_limits": { 00:29:16.079 "rw_ios_per_sec": 0, 00:29:16.079 "rw_mbytes_per_sec": 0, 00:29:16.079 "r_mbytes_per_sec": 0, 00:29:16.079 "w_mbytes_per_sec": 0 00:29:16.079 }, 00:29:16.079 "claimed": false, 00:29:16.079 "zoned": false, 00:29:16.079 "supported_io_types": { 00:29:16.079 "read": true, 00:29:16.079 "write": true, 00:29:16.079 "unmap": false, 00:29:16.079 "flush": true, 00:29:16.079 "reset": true, 00:29:16.079 "nvme_admin": true, 00:29:16.079 "nvme_io": true, 00:29:16.079 "nvme_io_md": false, 00:29:16.079 "write_zeroes": true, 00:29:16.079 "zcopy": false, 00:29:16.079 "get_zone_info": false, 00:29:16.079 "zone_management": false, 00:29:16.079 "zone_append": false, 00:29:16.079 "compare": true, 00:29:16.079 "compare_and_write": true, 00:29:16.079 "abort": true, 00:29:16.079 "seek_hole": false, 00:29:16.079 "seek_data": false, 00:29:16.079 "copy": true, 00:29:16.079 "nvme_iov_md": false 00:29:16.079 }, 00:29:16.079 "memory_domains": [ 
00:29:16.079 { 00:29:16.079 "dma_device_id": "system", 00:29:16.079 "dma_device_type": 1 00:29:16.079 } 00:29:16.079 ], 00:29:16.079 "driver_specific": { 00:29:16.079 "nvme": [ 00:29:16.079 { 00:29:16.079 "trid": { 00:29:16.079 "trtype": "TCP", 00:29:16.079 "adrfam": "IPv4", 00:29:16.079 "traddr": "10.0.0.2", 00:29:16.079 "trsvcid": "4420", 00:29:16.079 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:16.079 }, 00:29:16.079 "ctrlr_data": { 00:29:16.079 "cntlid": 2, 00:29:16.079 "vendor_id": "0x8086", 00:29:16.079 "model_number": "SPDK bdev Controller", 00:29:16.079 "serial_number": "00000000000000000000", 00:29:16.079 "firmware_revision": "25.01", 00:29:16.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.079 "oacs": { 00:29:16.079 "security": 0, 00:29:16.079 "format": 0, 00:29:16.079 "firmware": 0, 00:29:16.079 "ns_manage": 0 00:29:16.079 }, 00:29:16.079 "multi_ctrlr": true, 00:29:16.079 "ana_reporting": false 00:29:16.079 }, 00:29:16.079 "vs": { 00:29:16.079 "nvme_version": "1.3" 00:29:16.079 }, 00:29:16.079 "ns_data": { 00:29:16.079 "id": 1, 00:29:16.079 "can_share": true 00:29:16.079 } 00:29:16.079 } 00:29:16.079 ], 00:29:16.079 "mp_policy": "active_passive" 00:29:16.079 } 00:29:16.079 } 00:29:16.079 ] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.sKmoLibTUu 
00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.sKmoLibTUu 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.sKmoLibTUu 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.079 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.080 [2024-12-15 13:09:23.890716] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:16.080 [2024-12-15 13:09:23.890812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.080 [2024-12-15 13:09:23.910781] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:16.080 nvme0n1 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.080 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.339 [ 00:29:16.339 { 00:29:16.339 "name": "nvme0n1", 00:29:16.339 "aliases": [ 00:29:16.339 "a8d0e27c-f8c3-4517-980e-ca0c2a998f8f" 00:29:16.339 ], 00:29:16.339 "product_name": "NVMe disk", 00:29:16.339 "block_size": 512, 00:29:16.339 "num_blocks": 2097152, 00:29:16.339 "uuid": "a8d0e27c-f8c3-4517-980e-ca0c2a998f8f", 00:29:16.339 "numa_id": 1, 00:29:16.339 "assigned_rate_limits": { 00:29:16.339 "rw_ios_per_sec": 0, 00:29:16.339 
"rw_mbytes_per_sec": 0, 00:29:16.339 "r_mbytes_per_sec": 0, 00:29:16.339 "w_mbytes_per_sec": 0 00:29:16.339 }, 00:29:16.339 "claimed": false, 00:29:16.339 "zoned": false, 00:29:16.339 "supported_io_types": { 00:29:16.339 "read": true, 00:29:16.339 "write": true, 00:29:16.339 "unmap": false, 00:29:16.339 "flush": true, 00:29:16.339 "reset": true, 00:29:16.339 "nvme_admin": true, 00:29:16.339 "nvme_io": true, 00:29:16.339 "nvme_io_md": false, 00:29:16.339 "write_zeroes": true, 00:29:16.339 "zcopy": false, 00:29:16.339 "get_zone_info": false, 00:29:16.339 "zone_management": false, 00:29:16.339 "zone_append": false, 00:29:16.339 "compare": true, 00:29:16.339 "compare_and_write": true, 00:29:16.339 "abort": true, 00:29:16.339 "seek_hole": false, 00:29:16.339 "seek_data": false, 00:29:16.339 "copy": true, 00:29:16.339 "nvme_iov_md": false 00:29:16.339 }, 00:29:16.339 "memory_domains": [ 00:29:16.339 { 00:29:16.339 "dma_device_id": "system", 00:29:16.339 "dma_device_type": 1 00:29:16.339 } 00:29:16.339 ], 00:29:16.339 "driver_specific": { 00:29:16.339 "nvme": [ 00:29:16.339 { 00:29:16.339 "trid": { 00:29:16.339 "trtype": "TCP", 00:29:16.339 "adrfam": "IPv4", 00:29:16.339 "traddr": "10.0.0.2", 00:29:16.339 "trsvcid": "4421", 00:29:16.339 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:16.339 }, 00:29:16.339 "ctrlr_data": { 00:29:16.339 "cntlid": 3, 00:29:16.339 "vendor_id": "0x8086", 00:29:16.339 "model_number": "SPDK bdev Controller", 00:29:16.339 "serial_number": "00000000000000000000", 00:29:16.339 "firmware_revision": "25.01", 00:29:16.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.339 "oacs": { 00:29:16.339 "security": 0, 00:29:16.339 "format": 0, 00:29:16.339 "firmware": 0, 00:29:16.339 "ns_manage": 0 00:29:16.339 }, 00:29:16.339 "multi_ctrlr": true, 00:29:16.339 "ana_reporting": false 00:29:16.339 }, 00:29:16.339 "vs": { 00:29:16.339 "nvme_version": "1.3" 00:29:16.339 }, 00:29:16.339 "ns_data": { 00:29:16.339 "id": 1, 00:29:16.339 "can_share": true 00:29:16.339 } 
00:29:16.339 } 00:29:16.339 ], 00:29:16.339 "mp_policy": "active_passive" 00:29:16.339 } 00:29:16.339 } 00:29:16.339 ] 00:29:16.339 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.340 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.340 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.340 13:09:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.sKmoLibTUu 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.340 rmmod nvme_tcp 00:29:16.340 rmmod nvme_fabrics 00:29:16.340 rmmod nvme_keyring 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:16.340 13:09:24 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1112854 ']' 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1112854 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1112854 ']' 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1112854 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1112854 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1112854' 00:29:16.340 killing process with pid 1112854 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1112854 00:29:16.340 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1112854 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:16.599 
13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.599 13:09:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.506 13:09:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.506 00:29:18.506 real 0m9.427s 00:29:18.506 user 0m3.007s 00:29:18.506 sys 0m4.829s 00:29:18.506 13:09:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.506 13:09:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.506 ************************************ 00:29:18.506 END TEST nvmf_async_init 00:29:18.506 ************************************ 00:29:18.506 13:09:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:18.506 13:09:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:18.506 13:09:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.506 13:09:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.766 ************************************ 00:29:18.766 START TEST dma 00:29:18.766 ************************************ 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
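The async_init test above exercises a full NVMe/TCP TLS handshake setup via SPDK RPCs. The sketch below condenses that sequence for readability; the RPC names, flags, NQNs, and the interchange-format PSK are taken verbatim from the log, while the `rpc` stand-in function is an assumption (a live SPDK target and `scripts/rpc.py` would be needed to issue them for real), so only the key-file handling actually executes here.

```shell
#!/usr/bin/env bash
# Condensed sketch of the TLS PSK flow from host/async_init.sh (lines @53-@66
# of the trace above). Only the key-file steps run; RPCs are echoed.
set -euo pipefail

# 1. Write the NVMe TLS interchange-format PSK to a private temp file,
#    exactly as the test does with mktemp + chmod 0600.
key_path=$(mktemp)
echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
chmod 0600 "$key_path"

# 2. The RPC sequence the test then issues against the running nvmf target.
#    'rpc' is a hypothetical stand-in for scripts/rpc.py so this sketch is
#    runnable without a target; each command name/flag appears in the log.
rpc() { echo "rpc.py $*"; }
rpc keyring_file_add_key key0 "$key_path"
rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk key0
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

# 3. Capture the key file's mode before cleanup (the test rm -f's it on exit).
perms=$(stat -c '%a' "$key_path")
rm -f "$key_path"
```

Note the ordering: the listener must be added with `--secure-channel` and the host registered with `--psk` before the controller attach, which is why the log prints the "TLS support is considered experimental" notice twice — once from `nvmf_tcp_listen` and once from `rpc_bdev_nvme_attach_controller`.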
00:29:18.766 * Looking for test storage... 00:29:18.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:18.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.766 --rc genhtml_branch_coverage=1 00:29:18.766 --rc genhtml_function_coverage=1 00:29:18.766 --rc genhtml_legend=1 00:29:18.766 --rc geninfo_all_blocks=1 00:29:18.766 --rc geninfo_unexecuted_blocks=1 00:29:18.766 00:29:18.766 ' 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:18.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.766 --rc genhtml_branch_coverage=1 00:29:18.766 --rc genhtml_function_coverage=1 
00:29:18.766 --rc genhtml_legend=1 00:29:18.766 --rc geninfo_all_blocks=1 00:29:18.766 --rc geninfo_unexecuted_blocks=1 00:29:18.766 00:29:18.766 ' 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:18.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.766 --rc genhtml_branch_coverage=1 00:29:18.766 --rc genhtml_function_coverage=1 00:29:18.766 --rc genhtml_legend=1 00:29:18.766 --rc geninfo_all_blocks=1 00:29:18.766 --rc geninfo_unexecuted_blocks=1 00:29:18.766 00:29:18.766 ' 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:18.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.766 --rc genhtml_branch_coverage=1 00:29:18.766 --rc genhtml_function_coverage=1 00:29:18.766 --rc genhtml_legend=1 00:29:18.766 --rc geninfo_all_blocks=1 00:29:18.766 --rc geninfo_unexecuted_blocks=1 00:29:18.766 00:29:18.766 ' 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.766 13:09:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:18.767 
13:09:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:18.767 00:29:18.767 real 0m0.214s 00:29:18.767 user 0m0.128s 00:29:18.767 sys 0m0.100s 00:29:18.767 13:09:26 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.767 13:09:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:18.767 ************************************ 00:29:18.767 END TEST dma 00:29:18.767 ************************************ 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.026 ************************************ 00:29:19.026 START TEST nvmf_identify 00:29:19.026 ************************************ 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:19.026 * Looking for test storage... 
00:29:19.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.026 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:19.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.027 --rc genhtml_branch_coverage=1 00:29:19.027 --rc genhtml_function_coverage=1 00:29:19.027 --rc genhtml_legend=1 00:29:19.027 --rc geninfo_all_blocks=1 00:29:19.027 --rc geninfo_unexecuted_blocks=1 00:29:19.027 00:29:19.027 ' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:29:19.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.027 --rc genhtml_branch_coverage=1 00:29:19.027 --rc genhtml_function_coverage=1 00:29:19.027 --rc genhtml_legend=1 00:29:19.027 --rc geninfo_all_blocks=1 00:29:19.027 --rc geninfo_unexecuted_blocks=1 00:29:19.027 00:29:19.027 ' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:19.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.027 --rc genhtml_branch_coverage=1 00:29:19.027 --rc genhtml_function_coverage=1 00:29:19.027 --rc genhtml_legend=1 00:29:19.027 --rc geninfo_all_blocks=1 00:29:19.027 --rc geninfo_unexecuted_blocks=1 00:29:19.027 00:29:19.027 ' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:19.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.027 --rc genhtml_branch_coverage=1 00:29:19.027 --rc genhtml_function_coverage=1 00:29:19.027 --rc genhtml_legend=1 00:29:19.027 --rc geninfo_all_blocks=1 00:29:19.027 --rc geninfo_unexecuted_blocks=1 00:29:19.027 00:29:19.027 ' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:19.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.027 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.287 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.287 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:19.287 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.287 13:09:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.863 13:09:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:25.863 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.863 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.864 
13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:25.864 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:25.864 Found net devices under 0000:af:00.0: cvl_0_0 00:29:25.864 13:09:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:25.864 Found net devices under 0000:af:00.1: cvl_0_1 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
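
The device-discovery loop above resolves each matched PCI function to its kernel net device by globbing `/sys/bus/pci/devices/<addr>/net/` and then stripping the directory prefix. A minimal sketch of that pattern, using a mock sysfs tree under `mktemp` so it runs without real hardware (the PCI address and interface name mirror the log but are illustrative here):

```shell
#!/bin/sh
# Build a throwaway mock of the sysfs layout the script globs:
#   /sys/bus/pci/devices/<pci-addr>/net/<ifname>
sysfs="$(mktemp -d)"
pci="0000:af:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"

# Glob the interface directories, then keep only the basename,
# mirroring pci_net_devs=("${pci_net_devs[@]##*/}") in the log.
found=""
for dev in "$sysfs/$pci/net/"*; do
    found="${dev##*/}"
    echo "Found net devices under $pci: $found"
done

rm -rf "$sysfs"
```

The same two-step glob-then-strip is why the log prints `Found net devices under 0000:af:00.0: cvl_0_0` rather than the full sysfs path.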
00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:29:25.864 00:29:25.864 --- 10.0.0.2 ping statistics --- 00:29:25.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.864 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:29:25.864 00:29:25.864 --- 10.0.0.1 ping statistics --- 00:29:25.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.864 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1116606 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1116606 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1116606 ']' 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
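
`waitforlisten 1116606` above blocks until the freshly launched `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries=100`. A sketch of that polling pattern, with a temp file standing in for the RPC socket and a background job standing in for the target process (both are stand-ins, not the real SPDK helper):

```shell
#!/bin/sh
# Simulate the target creating its RPC socket after a short startup delay.
sock="$(mktemp -u)"
( sleep 1; : > "$sock" ) &

# Poll for the socket path, giving up after max_retries attempts,
# as common/autotest_common.sh does with local max_retries=100.
retries=0
max_retries=100
while [ ! -e "$sock" ] && [ "$retries" -lt "$max_retries" ]; do
    sleep 0.1
    retries=$((retries + 1))
done
wait

status=down
[ -e "$sock" ] && status=up
rm -f "$sock"
```

In the real helper the check is a successful RPC over the socket rather than mere path existence, but the retry loop has the same shape.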
00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.864 13:09:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.864 [2024-12-15 13:09:32.844681] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:25.864 [2024-12-15 13:09:32.844722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.864 [2024-12-15 13:09:32.924498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.864 [2024-12-15 13:09:32.948695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.864 [2024-12-15 13:09:32.948735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.864 [2024-12-15 13:09:32.948743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.864 [2024-12-15 13:09:32.948750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.864 [2024-12-15 13:09:32.948756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
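
The target was started with `-m 0xF` and the reactor lines above show threads coming up on cores 0 through 3. A sketch of how such a hex core mask expands to individual core IDs (bit i set means core i runs a reactor):

```shell
#!/bin/sh
# Expand a DPDK/SPDK-style core mask into a space-separated core list.
mask=$((0xF))
cores=""
core=0
while [ "$mask" -ne 0 ]; do
    # Low bit set -> this core is in the mask.
    if [ $((mask & 1)) -ne 0 ]; then
        cores="$cores $core"
    fi
    mask=$((mask >> 1))
    core=$((core + 1))
done
cores="${cores# }"
echo "reactor cores: $cores"
```

This also explains the "Total cores available: 4" notice from `spdk_app_start`: 0xF has four bits set.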
00:29:25.864 [2024-12-15 13:09:32.950228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.864 [2024-12-15 13:09:32.950340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.864 [2024-12-15 13:09:32.950447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.864 [2024-12-15 13:09:32.950448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.864 [2024-12-15 13:09:33.043180] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.864 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.865 Malloc0 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.865 13:09:33 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.865 [2024-12-15 13:09:33.150947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.865 13:09:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.865 [ 00:29:25.865 { 00:29:25.865 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:25.865 "subtype": "Discovery", 00:29:25.865 "listen_addresses": [ 00:29:25.865 { 00:29:25.865 "trtype": "TCP", 00:29:25.865 "adrfam": "IPv4", 00:29:25.865 "traddr": "10.0.0.2", 00:29:25.865 "trsvcid": "4420" 00:29:25.865 } 00:29:25.865 ], 00:29:25.865 "allow_any_host": true, 00:29:25.865 "hosts": [] 00:29:25.865 }, 00:29:25.865 { 00:29:25.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.865 "subtype": "NVMe", 00:29:25.865 "listen_addresses": [ 00:29:25.865 { 00:29:25.865 "trtype": "TCP", 00:29:25.865 "adrfam": "IPv4", 00:29:25.865 "traddr": "10.0.0.2", 00:29:25.865 "trsvcid": "4420" 00:29:25.865 } 00:29:25.865 ], 00:29:25.865 "allow_any_host": true, 00:29:25.865 "hosts": [], 00:29:25.865 "serial_number": "SPDK00000000000001", 00:29:25.865 "model_number": "SPDK bdev Controller", 00:29:25.865 "max_namespaces": 32, 00:29:25.865 "min_cntlid": 1, 00:29:25.865 "max_cntlid": 65519, 00:29:25.865 "namespaces": [ 00:29:25.865 { 00:29:25.865 "nsid": 1, 00:29:25.865 "bdev_name": "Malloc0", 00:29:25.865 "name": "Malloc0", 00:29:25.865 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:25.865 "eui64": "ABCDEF0123456789", 00:29:25.865 "uuid": "7b0cd545-13ba-480a-9930-e2f1ea3e03e9" 00:29:25.865 } 00:29:25.865 ] 00:29:25.865 } 00:29:25.865 ] 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.865 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:25.865 [2024-12-15 13:09:33.206701] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:25.865 [2024-12-15 13:09:33.206743] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116632 ] 00:29:25.865 [2024-12-15 13:09:33.246320] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:25.865 [2024-12-15 13:09:33.246372] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:25.865 [2024-12-15 13:09:33.246377] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:25.865 [2024-12-15 13:09:33.246387] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:25.865 [2024-12-15 13:09:33.246395] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:25.865 [2024-12-15 13:09:33.250059] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:25.865 [2024-12-15 13:09:33.250093] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ec3ed0 0 00:29:25.865 [2024-12-15 13:09:33.256862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:25.865 [2024-12-15 13:09:33.256876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:25.865 [2024-12-15 13:09:33.256881] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:25.865 [2024-12-15 13:09:33.256884] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:25.865 [2024-12-15 13:09:33.256912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.256918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.256922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 00:29:25.865 [2024-12-15 13:09:33.256934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:25.865 [2024-12-15 13:09:33.256948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.865 [2024-12-15 13:09:33.264837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.865 [2024-12-15 13:09:33.264845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.865 [2024-12-15 13:09:33.264849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.264853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.865 [2024-12-15 13:09:33.264865] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:25.865 [2024-12-15 13:09:33.264871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:25.865 [2024-12-15 13:09:33.264878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:25.865 [2024-12-15 13:09:33.264889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.264892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.264896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 
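
Earlier in this run, `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33` logged `[: : integer expression expected` because an empty string reached a numeric test (`'[' '' -eq 1 ']'`). The test fails harmlessly and the script continues, but the usual guard is to default empty values before the comparison; a sketch (the variable name is illustrative):

```shell
#!/bin/sh
# An unset or empty flag reaching [ "$val" -eq 1 ] triggers
# "integer expression expected". Defaulting it to 0 avoids that:
val=""
if [ "${val:-0}" -eq 1 ]; then
    guarded="taken"
else
    guarded="skipped"
fi
```

With the guard, the empty value compares as 0 and the branch is skipped cleanly instead of emitting the error seen in the log.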
00:29:25.865 [2024-12-15 13:09:33.264902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.865 [2024-12-15 13:09:33.264914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.865 [2024-12-15 13:09:33.265083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.865 [2024-12-15 13:09:33.265089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.865 [2024-12-15 13:09:33.265092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.265095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.865 [2024-12-15 13:09:33.265100] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:25.865 [2024-12-15 13:09:33.265106] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:25.865 [2024-12-15 13:09:33.265113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.265116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.265119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 00:29:25.865 [2024-12-15 13:09:33.265124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.865 [2024-12-15 13:09:33.265134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.865 [2024-12-15 13:09:33.265199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.865 [2024-12-15 13:09:33.265204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:25.865 [2024-12-15 13:09:33.265207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.265211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.865 [2024-12-15 13:09:33.265215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:25.865 [2024-12-15 13:09:33.265222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:25.865 [2024-12-15 13:09:33.265228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.265231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.265234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 00:29:25.865 [2024-12-15 13:09:33.265240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.865 [2024-12-15 13:09:33.265249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.865 [2024-12-15 13:09:33.265309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.865 [2024-12-15 13:09:33.265315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.865 [2024-12-15 13:09:33.265318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.265321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.865 [2024-12-15 13:09:33.265325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:25.865 [2024-12-15 13:09:33.265335] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.265339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.865 [2024-12-15 13:09:33.265342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 00:29:25.865 [2024-12-15 13:09:33.265347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.865 [2024-12-15 13:09:33.265356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.865 [2024-12-15 13:09:33.265420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.865 [2024-12-15 13:09:33.265425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.866 [2024-12-15 13:09:33.265428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.866 [2024-12-15 13:09:33.265435] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:25.866 [2024-12-15 13:09:33.265440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:25.866 [2024-12-15 13:09:33.265447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:25.866 [2024-12-15 13:09:33.265554] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:25.866 [2024-12-15 13:09:33.265558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:25.866 [2024-12-15 13:09:33.265566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.265578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.866 [2024-12-15 13:09:33.265587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.866 [2024-12-15 13:09:33.265648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.866 [2024-12-15 13:09:33.265653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.866 [2024-12-15 13:09:33.265656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.866 [2024-12-15 13:09:33.265664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:25.866 [2024-12-15 13:09:33.265672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.265684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.866 [2024-12-15 13:09:33.265693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.866 [2024-12-15 
13:09:33.265754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.866 [2024-12-15 13:09:33.265759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.866 [2024-12-15 13:09:33.265762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.866 [2024-12-15 13:09:33.265771] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:25.866 [2024-12-15 13:09:33.265776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:25.866 [2024-12-15 13:09:33.265783] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:25.866 [2024-12-15 13:09:33.265790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:25.866 [2024-12-15 13:09:33.265798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.265806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.866 [2024-12-15 13:09:33.265816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.866 [2024-12-15 13:09:33.265899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.866 [2024-12-15 13:09:33.265905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:29:25.866 [2024-12-15 13:09:33.265908] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265911] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec3ed0): datao=0, datal=4096, cccid=0 00:29:25.866 [2024-12-15 13:09:33.265916] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2f540) on tqpair(0x1ec3ed0): expected_datao=0, payload_size=4096 00:29:25.866 [2024-12-15 13:09:33.265920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265931] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265935] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.866 [2024-12-15 13:09:33.265974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.866 [2024-12-15 13:09:33.265977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.265980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.866 [2024-12-15 13:09:33.265987] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:25.866 [2024-12-15 13:09:33.265991] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:25.866 [2024-12-15 13:09:33.265995] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:25.866 [2024-12-15 13:09:33.266000] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:25.866 [2024-12-15 13:09:33.266004] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:25.866 [2024-12-15 13:09:33.266008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:25.866 [2024-12-15 13:09:33.266017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:25.866 [2024-12-15 13:09:33.266027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.266039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:25.866 [2024-12-15 13:09:33.266052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.866 [2024-12-15 13:09:33.266114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.866 [2024-12-15 13:09:33.266120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.866 [2024-12-15 13:09:33.266123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.866 [2024-12-15 13:09:33.266133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.266144] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.866 [2024-12-15 13:09:33.266149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.266160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.866 [2024-12-15 13:09:33.266165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.266176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.866 [2024-12-15 13:09:33.266181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.266192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.866 [2024-12-15 13:09:33.266196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:25.866 [2024-12-15 13:09:33.266206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:25.866 [2024-12-15 13:09:33.266212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.266220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.866 [2024-12-15 13:09:33.266232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f540, cid 0, qid 0 00:29:25.866 [2024-12-15 13:09:33.266236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f6c0, cid 1, qid 0 00:29:25.866 [2024-12-15 13:09:33.266240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f840, cid 2, qid 0 00:29:25.866 [2024-12-15 13:09:33.266244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.866 [2024-12-15 13:09:33.266248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2fb40, cid 4, qid 0 00:29:25.866 [2024-12-15 13:09:33.266340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.866 [2024-12-15 13:09:33.266345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.866 [2024-12-15 13:09:33.266348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2fb40) on tqpair=0x1ec3ed0 00:29:25.866 [2024-12-15 13:09:33.266358] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:25.866 [2024-12-15 13:09:33.266362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:29:25.866 [2024-12-15 13:09:33.266371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.866 [2024-12-15 13:09:33.266374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec3ed0) 00:29:25.866 [2024-12-15 13:09:33.266380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.866 [2024-12-15 13:09:33.266389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2fb40, cid 4, qid 0 00:29:25.867 [2024-12-15 13:09:33.266461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.867 [2024-12-15 13:09:33.266467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.867 [2024-12-15 13:09:33.266470] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.266473] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec3ed0): datao=0, datal=4096, cccid=4 00:29:25.867 [2024-12-15 13:09:33.266477] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2fb40) on tqpair(0x1ec3ed0): expected_datao=0, payload_size=4096 00:29:25.867 [2024-12-15 13:09:33.266481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.266490] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.266494] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.309831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.867 [2024-12-15 13:09:33.309844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.867 [2024-12-15 13:09:33.309848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.309851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1f2fb40) on tqpair=0x1ec3ed0 00:29:25.867 [2024-12-15 13:09:33.309864] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:25.867 [2024-12-15 13:09:33.309889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.309893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec3ed0) 00:29:25.867 [2024-12-15 13:09:33.309900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.867 [2024-12-15 13:09:33.309906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.309910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.309913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ec3ed0) 00:29:25.867 [2024-12-15 13:09:33.309919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.867 [2024-12-15 13:09:33.309933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2fb40, cid 4, qid 0 00:29:25.867 [2024-12-15 13:09:33.309938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2fcc0, cid 5, qid 0 00:29:25.867 [2024-12-15 13:09:33.310125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.867 [2024-12-15 13:09:33.310131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.867 [2024-12-15 13:09:33.310134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.310137] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec3ed0): datao=0, datal=1024, cccid=4 00:29:25.867 [2024-12-15 13:09:33.310141] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2fb40) on tqpair(0x1ec3ed0): expected_datao=0, payload_size=1024 00:29:25.867 [2024-12-15 13:09:33.310147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.310153] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.310156] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.310161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.867 [2024-12-15 13:09:33.310165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.867 [2024-12-15 13:09:33.310168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.310171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2fcc0) on tqpair=0x1ec3ed0 00:29:25.867 [2024-12-15 13:09:33.350965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.867 [2024-12-15 13:09:33.350974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.867 [2024-12-15 13:09:33.350977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.350981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2fb40) on tqpair=0x1ec3ed0 00:29:25.867 [2024-12-15 13:09:33.350991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.350994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec3ed0) 00:29:25.867 [2024-12-15 13:09:33.351001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.867 [2024-12-15 13:09:33.351015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2fb40, cid 4, qid 0 00:29:25.867 [2024-12-15 13:09:33.351091] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.867 [2024-12-15 13:09:33.351097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.867 [2024-12-15 13:09:33.351100] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.351103] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec3ed0): datao=0, datal=3072, cccid=4 00:29:25.867 [2024-12-15 13:09:33.351107] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2fb40) on tqpair(0x1ec3ed0): expected_datao=0, payload_size=3072 00:29:25.867 [2024-12-15 13:09:33.351111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.351117] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.351120] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.351154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.867 [2024-12-15 13:09:33.351159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.867 [2024-12-15 13:09:33.351162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.351165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2fb40) on tqpair=0x1ec3ed0 00:29:25.867 [2024-12-15 13:09:33.351172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.351176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec3ed0) 00:29:25.867 [2024-12-15 13:09:33.351181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.867 [2024-12-15 13:09:33.351195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2fb40, cid 4, qid 0 00:29:25.867 [2024-12-15 
13:09:33.351270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.867 [2024-12-15 13:09:33.351275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.867 [2024-12-15 13:09:33.351278] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.351281] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec3ed0): datao=0, datal=8, cccid=4 00:29:25.867 [2024-12-15 13:09:33.351285] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f2fb40) on tqpair(0x1ec3ed0): expected_datao=0, payload_size=8 00:29:25.867 [2024-12-15 13:09:33.351289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.351297] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.351300] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.395833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.867 [2024-12-15 13:09:33.395842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.867 [2024-12-15 13:09:33.395845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.867 [2024-12-15 13:09:33.395849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2fb40) on tqpair=0x1ec3ed0 00:29:25.867 ===================================================== 00:29:25.867 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:25.867 ===================================================== 00:29:25.867 Controller Capabilities/Features 00:29:25.867 ================================ 00:29:25.867 Vendor ID: 0000 00:29:25.867 Subsystem Vendor ID: 0000 00:29:25.867 Serial Number: .................... 00:29:25.867 Model Number: ........................................ 
00:29:25.867 Firmware Version: 25.01 00:29:25.867 Recommended Arb Burst: 0 00:29:25.867 IEEE OUI Identifier: 00 00 00 00:29:25.867 Multi-path I/O 00:29:25.867 May have multiple subsystem ports: No 00:29:25.867 May have multiple controllers: No 00:29:25.867 Associated with SR-IOV VF: No 00:29:25.867 Max Data Transfer Size: 131072 00:29:25.867 Max Number of Namespaces: 0 00:29:25.867 Max Number of I/O Queues: 1024 00:29:25.867 NVMe Specification Version (VS): 1.3 00:29:25.867 NVMe Specification Version (Identify): 1.3 00:29:25.867 Maximum Queue Entries: 128 00:29:25.867 Contiguous Queues Required: Yes 00:29:25.867 Arbitration Mechanisms Supported 00:29:25.867 Weighted Round Robin: Not Supported 00:29:25.867 Vendor Specific: Not Supported 00:29:25.867 Reset Timeout: 15000 ms 00:29:25.867 Doorbell Stride: 4 bytes 00:29:25.867 NVM Subsystem Reset: Not Supported 00:29:25.867 Command Sets Supported 00:29:25.867 NVM Command Set: Supported 00:29:25.867 Boot Partition: Not Supported 00:29:25.867 Memory Page Size Minimum: 4096 bytes 00:29:25.867 Memory Page Size Maximum: 4096 bytes 00:29:25.867 Persistent Memory Region: Not Supported 00:29:25.867 Optional Asynchronous Events Supported 00:29:25.867 Namespace Attribute Notices: Not Supported 00:29:25.867 Firmware Activation Notices: Not Supported 00:29:25.867 ANA Change Notices: Not Supported 00:29:25.867 PLE Aggregate Log Change Notices: Not Supported 00:29:25.867 LBA Status Info Alert Notices: Not Supported 00:29:25.867 EGE Aggregate Log Change Notices: Not Supported 00:29:25.867 Normal NVM Subsystem Shutdown event: Not Supported 00:29:25.867 Zone Descriptor Change Notices: Not Supported 00:29:25.867 Discovery Log Change Notices: Supported 00:29:25.867 Controller Attributes 00:29:25.867 128-bit Host Identifier: Not Supported 00:29:25.867 Non-Operational Permissive Mode: Not Supported 00:29:25.867 NVM Sets: Not Supported 00:29:25.867 Read Recovery Levels: Not Supported 00:29:25.867 Endurance Groups: Not Supported 00:29:25.867 
Predictable Latency Mode: Not Supported 00:29:25.867 Traffic Based Keep ALive: Not Supported 00:29:25.867 Namespace Granularity: Not Supported 00:29:25.867 SQ Associations: Not Supported 00:29:25.867 UUID List: Not Supported 00:29:25.867 Multi-Domain Subsystem: Not Supported 00:29:25.867 Fixed Capacity Management: Not Supported 00:29:25.867 Variable Capacity Management: Not Supported 00:29:25.867 Delete Endurance Group: Not Supported 00:29:25.867 Delete NVM Set: Not Supported 00:29:25.867 Extended LBA Formats Supported: Not Supported 00:29:25.867 Flexible Data Placement Supported: Not Supported 00:29:25.867 00:29:25.867 Controller Memory Buffer Support 00:29:25.867 ================================ 00:29:25.867 Supported: No 00:29:25.867 00:29:25.867 Persistent Memory Region Support 00:29:25.867 ================================ 00:29:25.867 Supported: No 00:29:25.867 00:29:25.868 Admin Command Set Attributes 00:29:25.868 ============================ 00:29:25.868 Security Send/Receive: Not Supported 00:29:25.868 Format NVM: Not Supported 00:29:25.868 Firmware Activate/Download: Not Supported 00:29:25.868 Namespace Management: Not Supported 00:29:25.868 Device Self-Test: Not Supported 00:29:25.868 Directives: Not Supported 00:29:25.868 NVMe-MI: Not Supported 00:29:25.868 Virtualization Management: Not Supported 00:29:25.868 Doorbell Buffer Config: Not Supported 00:29:25.868 Get LBA Status Capability: Not Supported 00:29:25.868 Command & Feature Lockdown Capability: Not Supported 00:29:25.868 Abort Command Limit: 1 00:29:25.868 Async Event Request Limit: 4 00:29:25.868 Number of Firmware Slots: N/A 00:29:25.868 Firmware Slot 1 Read-Only: N/A 00:29:25.868 Firmware Activation Without Reset: N/A 00:29:25.868 Multiple Update Detection Support: N/A 00:29:25.868 Firmware Update Granularity: No Information Provided 00:29:25.868 Per-Namespace SMART Log: No 00:29:25.868 Asymmetric Namespace Access Log Page: Not Supported 00:29:25.868 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:25.868 Command Effects Log Page: Not Supported 00:29:25.868 Get Log Page Extended Data: Supported 00:29:25.868 Telemetry Log Pages: Not Supported 00:29:25.868 Persistent Event Log Pages: Not Supported 00:29:25.868 Supported Log Pages Log Page: May Support 00:29:25.868 Commands Supported & Effects Log Page: Not Supported 00:29:25.868 Feature Identifiers & Effects Log Page:May Support 00:29:25.868 NVMe-MI Commands & Effects Log Page: May Support 00:29:25.868 Data Area 4 for Telemetry Log: Not Supported 00:29:25.868 Error Log Page Entries Supported: 128 00:29:25.868 Keep Alive: Not Supported 00:29:25.868 00:29:25.868 NVM Command Set Attributes 00:29:25.868 ========================== 00:29:25.868 Submission Queue Entry Size 00:29:25.868 Max: 1 00:29:25.868 Min: 1 00:29:25.868 Completion Queue Entry Size 00:29:25.868 Max: 1 00:29:25.868 Min: 1 00:29:25.868 Number of Namespaces: 0 00:29:25.868 Compare Command: Not Supported 00:29:25.868 Write Uncorrectable Command: Not Supported 00:29:25.868 Dataset Management Command: Not Supported 00:29:25.868 Write Zeroes Command: Not Supported 00:29:25.868 Set Features Save Field: Not Supported 00:29:25.868 Reservations: Not Supported 00:29:25.868 Timestamp: Not Supported 00:29:25.868 Copy: Not Supported 00:29:25.868 Volatile Write Cache: Not Present 00:29:25.868 Atomic Write Unit (Normal): 1 00:29:25.868 Atomic Write Unit (PFail): 1 00:29:25.868 Atomic Compare & Write Unit: 1 00:29:25.868 Fused Compare & Write: Supported 00:29:25.868 Scatter-Gather List 00:29:25.868 SGL Command Set: Supported 00:29:25.868 SGL Keyed: Supported 00:29:25.868 SGL Bit Bucket Descriptor: Not Supported 00:29:25.868 SGL Metadata Pointer: Not Supported 00:29:25.868 Oversized SGL: Not Supported 00:29:25.868 SGL Metadata Address: Not Supported 00:29:25.868 SGL Offset: Supported 00:29:25.868 Transport SGL Data Block: Not Supported 00:29:25.868 Replay Protected Memory Block: Not Supported 00:29:25.868 00:29:25.868 
Firmware Slot Information 00:29:25.868 ========================= 00:29:25.868 Active slot: 0 00:29:25.868 00:29:25.868 00:29:25.868 Error Log 00:29:25.868 ========= 00:29:25.868 00:29:25.868 Active Namespaces 00:29:25.868 ================= 00:29:25.868 Discovery Log Page 00:29:25.868 ================== 00:29:25.868 Generation Counter: 2 00:29:25.868 Number of Records: 2 00:29:25.868 Record Format: 0 00:29:25.868 00:29:25.868 Discovery Log Entry 0 00:29:25.868 ---------------------- 00:29:25.868 Transport Type: 3 (TCP) 00:29:25.868 Address Family: 1 (IPv4) 00:29:25.868 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:25.868 Entry Flags: 00:29:25.868 Duplicate Returned Information: 1 00:29:25.868 Explicit Persistent Connection Support for Discovery: 1 00:29:25.868 Transport Requirements: 00:29:25.868 Secure Channel: Not Required 00:29:25.868 Port ID: 0 (0x0000) 00:29:25.868 Controller ID: 65535 (0xffff) 00:29:25.868 Admin Max SQ Size: 128 00:29:25.868 Transport Service Identifier: 4420 00:29:25.868 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:25.868 Transport Address: 10.0.0.2 00:29:25.868 Discovery Log Entry 1 00:29:25.868 ---------------------- 00:29:25.868 Transport Type: 3 (TCP) 00:29:25.868 Address Family: 1 (IPv4) 00:29:25.868 Subsystem Type: 2 (NVM Subsystem) 00:29:25.868 Entry Flags: 00:29:25.868 Duplicate Returned Information: 0 00:29:25.868 Explicit Persistent Connection Support for Discovery: 0 00:29:25.868 Transport Requirements: 00:29:25.868 Secure Channel: Not Required 00:29:25.868 Port ID: 0 (0x0000) 00:29:25.868 Controller ID: 65535 (0xffff) 00:29:25.868 Admin Max SQ Size: 128 00:29:25.868 Transport Service Identifier: 4420 00:29:25.868 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:25.868 Transport Address: 10.0.0.2 [2024-12-15 13:09:33.395931] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:25.868 [2024-12-15 
13:09:33.395942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f540) on tqpair=0x1ec3ed0 00:29:25.868 [2024-12-15 13:09:33.395948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.868 [2024-12-15 13:09:33.395953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f6c0) on tqpair=0x1ec3ed0 00:29:25.868 [2024-12-15 13:09:33.395957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.868 [2024-12-15 13:09:33.395961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f840) on tqpair=0x1ec3ed0 00:29:25.868 [2024-12-15 13:09:33.395965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.868 [2024-12-15 13:09:33.395969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.868 [2024-12-15 13:09:33.395973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.868 [2024-12-15 13:09:33.395980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.868 [2024-12-15 13:09:33.395983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.868 [2024-12-15 13:09:33.395986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.868 [2024-12-15 13:09:33.395993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.868 [2024-12-15 13:09:33.396006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.868 [2024-12-15 13:09:33.396070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.868 [2024-12-15 
13:09:33.396075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.868 [2024-12-15 13:09:33.396078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.868 [2024-12-15 13:09:33.396081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.868 [2024-12-15 13:09:33.396087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.868 [2024-12-15 13:09:33.396090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.868 [2024-12-15 13:09:33.396093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.868 [2024-12-15 13:09:33.396099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.868 [2024-12-15 13:09:33.396111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.868 [2024-12-15 13:09:33.396178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.868 [2024-12-15 13:09:33.396183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.868 [2024-12-15 13:09:33.396186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.868 [2024-12-15 13:09:33.396190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.396194] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:25.869 [2024-12-15 13:09:33.396198] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:25.869 [2024-12-15 13:09:33.396207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 
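The two `nvme_ctrlr_shutdown_set_cc_done` records just above report `RTD3E = 0 us` and `shutdown timeout = 10000 ms`, and the repeated `FABRIC PROPERTY GET` exchanges that follow are the host polling CSTS until shutdown completes. A minimal sketch of that timing logic, assuming (as the log suggests, not quoting SPDK's actual code) that a zero RTD3E falls back to a 10,000 ms default and that CSTS.SHST = 10b means shutdown complete:

```python
# Hypothetical model of the shutdown timing seen in the log above.
# Assumptions: RTD3E (Resume-from-RTD3 Entry latency, microseconds) is
# rounded up to milliseconds, with a 10000 ms fallback when it is 0.
import math

def shutdown_timeout_ms(rtd3e_us: int, default_ms: int = 10000) -> int:
    """Pick a shutdown timeout: RTD3E rounded up to ms, else the default."""
    if rtd3e_us == 0:
        return default_ms
    return math.ceil(rtd3e_us / 1000)

def poll_shutdown(shst_samples, timeout_ms):
    """Poll CSTS.SHST once per ms until it reads 2 (shutdown complete)."""
    for elapsed_ms, shst in enumerate(shst_samples):
        if elapsed_ms > timeout_ms:
            return None              # timed out waiting for shutdown
        if shst == 0x2:              # SHST = 10b: shutdown processing complete
            return elapsed_ms
    return None

print(shutdown_timeout_ms(0))                                   # → 10000
print(poll_shutdown([0, 1, 1, 1, 2], shutdown_timeout_ms(0)))   # → 4
```

With these assumptions the poll returning after 4 samples matches the later `shutdown complete in 4 milliseconds` record.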
[2024-12-15 13:09:33.396214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.396219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.396229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.396290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.396295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.396298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.396310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.396322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.396331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.396398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.396404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.396407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on 
tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.396417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.396429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.396438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.396498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.396503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.396506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.396517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.396529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.396538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.396595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.396601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:25.869 [2024-12-15 13:09:33.396604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.396617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.396629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.396638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.396705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.396710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.396713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.396724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.396736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.396745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.396813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.396819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.396822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.396839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.396850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.396860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.396924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.396929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.396932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.396943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.396949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.396955] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.396963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.397042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.397047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.397050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.397053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.397062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.397067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.397070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.397075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.397084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.397144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.397149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.397152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.397155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.397163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.397166] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.397169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.397175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.397183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.397244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.397250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.397253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.397256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.397263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.397267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.397270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.397275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.397284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.400831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.400839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.400842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.400845] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.869 [2024-12-15 13:09:33.400855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.400858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.869 [2024-12-15 13:09:33.400861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec3ed0) 00:29:25.869 [2024-12-15 13:09:33.400867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.869 [2024-12-15 13:09:33.400877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f2f9c0, cid 3, qid 0 00:29:25.869 [2024-12-15 13:09:33.401028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.869 [2024-12-15 13:09:33.401034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.869 [2024-12-15 13:09:33.401037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.401040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f2f9c0) on tqpair=0x1ec3ed0 00:29:25.870 [2024-12-15 13:09:33.401046] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:29:25.870 00:29:25.870 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:25.870 [2024-12-15 13:09:33.436644] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
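The Discovery Log Page dumped earlier prints each entry's transport type, address family, and subsystem type as numeric codes with names in parentheses (e.g. `Transport Type: 3 (TCP)`). A hypothetical decoder, not part of SPDK, using the standard NVMe-oF code assignments:

```python
# Hypothetical helper: decode the numeric codes spdk_nvme_identify prints
# in each Discovery Log Entry into their NVMe-oF specification names.
TRTYPE = {1: "RDMA", 2: "FC", 3: "TCP"}               # transport type codes
ADRFAM = {1: "IPv4", 2: "IPv6", 3: "IB", 4: "FC"}     # address family codes
SUBTYPE = {2: "NVM Subsystem", 3: "Current Discovery Subsystem"}

def decode_entry(trtype, adrfam, subtype, traddr, trsvcid, subnqn):
    """Render one discovery log entry the way the log output above shows it."""
    return {
        "Transport Type": f"{trtype} ({TRTYPE[trtype]})",
        "Address Family": f"{adrfam} ({ADRFAM[adrfam]})",
        "Subsystem Type": f"{subtype} ({SUBTYPE[subtype]})",
        "Transport Address": traddr,
        "Transport Service Identifier": trsvcid,
        "NVM Subsystem Qualified Name": subnqn,
    }

# Discovery Log Entry 1 from the log: TCP/IPv4 NVM subsystem at 10.0.0.2:4420
entry = decode_entry(3, 1, 2, "10.0.0.2", "4420", "nqn.2016-06.io.spdk:cnode1")
print(entry["Transport Type"])   # → 3 (TCP)
```

Entry 0 differs only in subtype (3, the current discovery subsystem) and NQN (`nqn.2014-08.org.nvmexpress.discovery`), which is why the subsequent `spdk_nvme_identify -r ... subnqn:nqn.2016-06.io.spdk:cnode1` run targets entry 1.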
00:29:25.870 [2024-12-15 13:09:33.436676] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116634 ] 00:29:25.870 [2024-12-15 13:09:33.474986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:25.870 [2024-12-15 13:09:33.475022] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:25.870 [2024-12-15 13:09:33.475027] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:25.870 [2024-12-15 13:09:33.475037] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:25.870 [2024-12-15 13:09:33.475044] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:25.870 [2024-12-15 13:09:33.478972] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:25.870 [2024-12-15 13:09:33.478999] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13b7ed0 0 00:29:25.870 [2024-12-15 13:09:33.485835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:25.870 [2024-12-15 13:09:33.485848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:25.870 [2024-12-15 13:09:33.485852] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:25.870 [2024-12-15 13:09:33.485855] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:25.870 [2024-12-15 13:09:33.485879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.485885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.485888] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.870 [2024-12-15 13:09:33.485898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:25.870 [2024-12-15 13:09:33.485914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423540, cid 0, qid 0 00:29:25.870 [2024-12-15 13:09:33.493835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.870 [2024-12-15 13:09:33.493843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.870 [2024-12-15 13:09:33.493846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.493850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0 00:29:25.870 [2024-12-15 13:09:33.493861] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:25.870 [2024-12-15 13:09:33.493866] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:25.870 [2024-12-15 13:09:33.493871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:25.870 [2024-12-15 13:09:33.493881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.493884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.493887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.870 [2024-12-15 13:09:33.493894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.870 [2024-12-15 13:09:33.493907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423540, cid 0, qid 0 00:29:25.870 [2024-12-15 13:09:33.494057] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.870 [2024-12-15 13:09:33.494063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.870 [2024-12-15 13:09:33.494066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0 00:29:25.870 [2024-12-15 13:09:33.494074] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:25.870 [2024-12-15 13:09:33.494080] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:25.870 [2024-12-15 13:09:33.494086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.870 [2024-12-15 13:09:33.494099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.870 [2024-12-15 13:09:33.494109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423540, cid 0, qid 0 00:29:25.870 [2024-12-15 13:09:33.494171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.870 [2024-12-15 13:09:33.494177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.870 [2024-12-15 13:09:33.494180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0 00:29:25.870 [2024-12-15 13:09:33.494188] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:29:25.870 [2024-12-15 13:09:33.494195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:25.870 [2024-12-15 13:09:33.494201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.870 [2024-12-15 13:09:33.494213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.870 [2024-12-15 13:09:33.494223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423540, cid 0, qid 0 00:29:25.870 [2024-12-15 13:09:33.494285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.870 [2024-12-15 13:09:33.494290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.870 [2024-12-15 13:09:33.494294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0 00:29:25.870 [2024-12-15 13:09:33.494301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:25.870 [2024-12-15 13:09:33.494309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.870 [2024-12-15 13:09:33.494321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.870 [2024-12-15 13:09:33.494331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423540, cid 0, qid 0 00:29:25.870 [2024-12-15 13:09:33.494395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.870 [2024-12-15 13:09:33.494401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.870 [2024-12-15 13:09:33.494404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0 00:29:25.870 [2024-12-15 13:09:33.494413] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:25.870 [2024-12-15 13:09:33.494417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:25.870 [2024-12-15 13:09:33.494424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:25.870 [2024-12-15 13:09:33.494531] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:25.870 [2024-12-15 13:09:33.494535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:25.870 [2024-12-15 13:09:33.494542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.870 [2024-12-15 13:09:33.494553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.870 [2024-12-15 13:09:33.494563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423540, cid 0, qid 0 00:29:25.870 [2024-12-15 13:09:33.494624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.870 [2024-12-15 13:09:33.494629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.870 [2024-12-15 13:09:33.494632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0 00:29:25.870 [2024-12-15 13:09:33.494640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:25.870 [2024-12-15 13:09:33.494648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.870 [2024-12-15 13:09:33.494660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.870 [2024-12-15 13:09:33.494669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423540, cid 0, qid 0 00:29:25.870 [2024-12-15 13:09:33.494735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.870 [2024-12-15 13:09:33.494740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.870 [2024-12-15 13:09:33.494743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0 00:29:25.870 [2024-12-15 13:09:33.494750] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:25.870 [2024-12-15 13:09:33.494755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:25.870 [2024-12-15 13:09:33.494761] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:25.870 [2024-12-15 13:09:33.494768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:25.870 [2024-12-15 13:09:33.494775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.870 [2024-12-15 13:09:33.494779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.494786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.871 [2024-12-15 13:09:33.494795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423540, cid 0, qid 0 00:29:25.871 [2024-12-15 13:09:33.494909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.871 [2024-12-15 13:09:33.494915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.871 [2024-12-15 13:09:33.494918] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.494921] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b7ed0): datao=0, datal=4096, cccid=0 00:29:25.871 [2024-12-15 13:09:33.494925] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1423540) on tqpair(0x13b7ed0): expected_datao=0, payload_size=4096 00:29:25.871 [2024-12-15 13:09:33.494929] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.494940] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.494944] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.494966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.871 [2024-12-15 13:09:33.494971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.871 [2024-12-15 13:09:33.494974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.494977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0 00:29:25.871 [2024-12-15 13:09:33.494983] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:25.871 [2024-12-15 13:09:33.494987] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:25.871 [2024-12-15 13:09:33.494991] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:25.871 [2024-12-15 13:09:33.494995] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:25.871 [2024-12-15 13:09:33.494999] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:25.871 [2024-12-15 13:09:33.495003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.495012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.495021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495025] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.495033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:25.871 [2024-12-15 13:09:33.495044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423540, cid 0, qid 0 00:29:25.871 [2024-12-15 13:09:33.495106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.871 [2024-12-15 13:09:33.495112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.871 [2024-12-15 13:09:33.495115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0 00:29:25.871 [2024-12-15 13:09:33.495124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.495135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.871 [2024-12-15 13:09:33.495142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.495153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:25.871 [2024-12-15 13:09:33.495158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.495169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.871 [2024-12-15 13:09:33.495174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.495185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.871 [2024-12-15 13:09:33.495189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.495198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.495203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.495212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.871 [2024-12-15 13:09:33.495224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1423540, cid 0, qid 0 00:29:25.871 [2024-12-15 13:09:33.495229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14236c0, cid 1, qid 0 00:29:25.871 [2024-12-15 13:09:33.495233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423840, cid 2, qid 0 00:29:25.871 [2024-12-15 13:09:33.495237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14239c0, cid 3, qid 0 00:29:25.871 [2024-12-15 13:09:33.495241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423b40, cid 4, qid 0 00:29:25.871 [2024-12-15 13:09:33.495339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.871 [2024-12-15 13:09:33.495345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.871 [2024-12-15 13:09:33.495348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423b40) on tqpair=0x13b7ed0 00:29:25.871 [2024-12-15 13:09:33.495355] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:25.871 [2024-12-15 13:09:33.495359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.495368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.495376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.495381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.871 [2024-12-15 
13:09:33.495387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.495394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:25.871 [2024-12-15 13:09:33.495404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423b40, cid 4, qid 0 00:29:25.871 [2024-12-15 13:09:33.495467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.871 [2024-12-15 13:09:33.495473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.871 [2024-12-15 13:09:33.495476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423b40) on tqpair=0x13b7ed0 00:29:25.871 [2024-12-15 13:09:33.495527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.495537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.495543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.495552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.871 [2024-12-15 13:09:33.495561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423b40, cid 4, qid 0 00:29:25.871 [2024-12-15 13:09:33.495633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.871 [2024-12-15 13:09:33.495639] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.871 [2024-12-15 13:09:33.495642] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495645] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b7ed0): datao=0, datal=4096, cccid=4 00:29:25.871 [2024-12-15 13:09:33.495649] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1423b40) on tqpair(0x13b7ed0): expected_datao=0, payload_size=4096 00:29:25.871 [2024-12-15 13:09:33.495653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495663] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.495666] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.535962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.871 [2024-12-15 13:09:33.535975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.871 [2024-12-15 13:09:33.535978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.535982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423b40) on tqpair=0x13b7ed0 00:29:25.871 [2024-12-15 13:09:33.535994] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:25.871 [2024-12-15 13:09:33.536006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.536015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:25.871 [2024-12-15 13:09:33.536022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.871 [2024-12-15 13:09:33.536025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x13b7ed0) 00:29:25.871 [2024-12-15 13:09:33.536032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.871 [2024-12-15 13:09:33.536044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423b40, cid 4, qid 0 00:29:25.871 [2024-12-15 13:09:33.536132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.871 [2024-12-15 13:09:33.536138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.871 [2024-12-15 13:09:33.536143] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.536147] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b7ed0): datao=0, datal=4096, cccid=4 00:29:25.872 [2024-12-15 13:09:33.536150] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1423b40) on tqpair(0x13b7ed0): expected_datao=0, payload_size=4096 00:29:25.872 [2024-12-15 13:09:33.536154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.536160] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.536163] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.576940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.872 [2024-12-15 13:09:33.576949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.872 [2024-12-15 13:09:33.576952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.576955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423b40) on tqpair=0x13b7ed0 00:29:25.872 [2024-12-15 13:09:33.576968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:25.872 
[2024-12-15 13:09:33.576978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:25.872 [2024-12-15 13:09:33.576985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.576988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.576994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.872 [2024-12-15 13:09:33.577006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423b40, cid 4, qid 0 00:29:25.872 [2024-12-15 13:09:33.577074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.872 [2024-12-15 13:09:33.577080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.872 [2024-12-15 13:09:33.577083] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577086] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b7ed0): datao=0, datal=4096, cccid=4 00:29:25.872 [2024-12-15 13:09:33.577089] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1423b40) on tqpair(0x13b7ed0): expected_datao=0, payload_size=4096 00:29:25.872 [2024-12-15 13:09:33.577093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577099] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577102] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.872 [2024-12-15 13:09:33.577118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.872 [2024-12-15 13:09:33.577121] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423b40) on tqpair=0x13b7ed0 00:29:25.872 [2024-12-15 13:09:33.577131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:25.872 [2024-12-15 13:09:33.577138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:25.872 [2024-12-15 13:09:33.577145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:25.872 [2024-12-15 13:09:33.577151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:25.872 [2024-12-15 13:09:33.577156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:25.872 [2024-12-15 13:09:33.577162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:25.872 [2024-12-15 13:09:33.577167] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:25.872 [2024-12-15 13:09:33.577171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:25.872 [2024-12-15 13:09:33.577176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:25.872 [2024-12-15 13:09:33.577189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577192] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.577198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.872 [2024-12-15 13:09:33.577204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.577215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.872 [2024-12-15 13:09:33.577228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423b40, cid 4, qid 0 00:29:25.872 [2024-12-15 13:09:33.577233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423cc0, cid 5, qid 0 00:29:25.872 [2024-12-15 13:09:33.577308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.872 [2024-12-15 13:09:33.577314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.872 [2024-12-15 13:09:33.577317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423b40) on tqpair=0x13b7ed0 00:29:25.872 [2024-12-15 13:09:33.577326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.872 [2024-12-15 13:09:33.577331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.872 [2024-12-15 13:09:33.577334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423cc0) on tqpair=0x13b7ed0 00:29:25.872 [2024-12-15 
13:09:33.577345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.577354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.872 [2024-12-15 13:09:33.577364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423cc0, cid 5, qid 0 00:29:25.872 [2024-12-15 13:09:33.577426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.872 [2024-12-15 13:09:33.577432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.872 [2024-12-15 13:09:33.577434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423cc0) on tqpair=0x13b7ed0 00:29:25.872 [2024-12-15 13:09:33.577445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.577454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.872 [2024-12-15 13:09:33.577463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423cc0, cid 5, qid 0 00:29:25.872 [2024-12-15 13:09:33.577525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.872 [2024-12-15 13:09:33.577532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.872 [2024-12-15 13:09:33.577535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1423cc0) on tqpair=0x13b7ed0 00:29:25.872 [2024-12-15 13:09:33.577546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.577555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.872 [2024-12-15 13:09:33.577564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423cc0, cid 5, qid 0 00:29:25.872 [2024-12-15 13:09:33.577626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.872 [2024-12-15 13:09:33.577632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.872 [2024-12-15 13:09:33.577635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423cc0) on tqpair=0x13b7ed0 00:29:25.872 [2024-12-15 13:09:33.577649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.577659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.872 [2024-12-15 13:09:33.577665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.577674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:25.872 [2024-12-15 13:09:33.577679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.577688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.872 [2024-12-15 13:09:33.577694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.872 [2024-12-15 13:09:33.577697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13b7ed0) 00:29:25.872 [2024-12-15 13:09:33.577702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.873 [2024-12-15 13:09:33.577713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423cc0, cid 5, qid 0 00:29:25.873 [2024-12-15 13:09:33.577717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423b40, cid 4, qid 0 00:29:25.873 [2024-12-15 13:09:33.577721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423e40, cid 6, qid 0 00:29:25.873 [2024-12-15 13:09:33.577725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423fc0, cid 7, qid 0 00:29:25.873 [2024-12-15 13:09:33.581838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.873 [2024-12-15 13:09:33.581845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.873 [2024-12-15 13:09:33.581848] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581851] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b7ed0): datao=0, datal=8192, cccid=5 00:29:25.873 [2024-12-15 13:09:33.581855] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1423cc0) on tqpair(0x13b7ed0): expected_datao=0, payload_size=8192 00:29:25.873 [2024-12-15 13:09:33.581859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581869] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581872] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.873 [2024-12-15 13:09:33.581882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.873 [2024-12-15 13:09:33.581885] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581888] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b7ed0): datao=0, datal=512, cccid=4 00:29:25.873 [2024-12-15 13:09:33.581891] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1423b40) on tqpair(0x13b7ed0): expected_datao=0, payload_size=512 00:29:25.873 [2024-12-15 13:09:33.581895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581900] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581903] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.873 [2024-12-15 13:09:33.581913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.873 [2024-12-15 13:09:33.581916] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581919] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b7ed0): datao=0, datal=512, cccid=6 00:29:25.873 [2024-12-15 13:09:33.581922] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1423e40) on tqpair(0x13b7ed0): expected_datao=0, payload_size=512 00:29:25.873 [2024-12-15 13:09:33.581926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581931] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581934] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:25.873 [2024-12-15 13:09:33.581944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:25.873 [2024-12-15 13:09:33.581947] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581949] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b7ed0): datao=0, datal=4096, cccid=7 00:29:25.873 [2024-12-15 13:09:33.581953] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1423fc0) on tqpair(0x13b7ed0): expected_datao=0, payload_size=4096 00:29:25.873 [2024-12-15 13:09:33.581957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581962] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581966] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.873 [2024-12-15 13:09:33.581975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.873 [2024-12-15 13:09:33.581978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.581981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423cc0) on tqpair=0x13b7ed0 00:29:25.873 [2024-12-15 13:09:33.581991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.873 [2024-12-15 13:09:33.581996] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.873 [2024-12-15 13:09:33.581999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.582002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423b40) on tqpair=0x13b7ed0 00:29:25.873 [2024-12-15 13:09:33.582010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.873 [2024-12-15 13:09:33.582015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.873 [2024-12-15 13:09:33.582018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.582022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423e40) on tqpair=0x13b7ed0 00:29:25.873 [2024-12-15 13:09:33.582027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.873 [2024-12-15 13:09:33.582033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.873 [2024-12-15 13:09:33.582036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.873 [2024-12-15 13:09:33.582040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423fc0) on tqpair=0x13b7ed0 00:29:25.873 ===================================================== 00:29:25.873 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.873 ===================================================== 00:29:25.873 Controller Capabilities/Features 00:29:25.873 ================================ 00:29:25.873 Vendor ID: 8086 00:29:25.873 Subsystem Vendor ID: 8086 00:29:25.873 Serial Number: SPDK00000000000001 00:29:25.873 Model Number: SPDK bdev Controller 00:29:25.873 Firmware Version: 25.01 00:29:25.873 Recommended Arb Burst: 6 00:29:25.873 IEEE OUI Identifier: e4 d2 5c 00:29:25.873 Multi-path I/O 00:29:25.873 May have multiple subsystem ports: Yes 00:29:25.873 May have multiple controllers: Yes 00:29:25.873 Associated with SR-IOV VF: No 
00:29:25.873 Max Data Transfer Size: 131072 00:29:25.873 Max Number of Namespaces: 32 00:29:25.873 Max Number of I/O Queues: 127 00:29:25.873 NVMe Specification Version (VS): 1.3 00:29:25.873 NVMe Specification Version (Identify): 1.3 00:29:25.873 Maximum Queue Entries: 128 00:29:25.873 Contiguous Queues Required: Yes 00:29:25.873 Arbitration Mechanisms Supported 00:29:25.873 Weighted Round Robin: Not Supported 00:29:25.873 Vendor Specific: Not Supported 00:29:25.873 Reset Timeout: 15000 ms 00:29:25.873 Doorbell Stride: 4 bytes 00:29:25.873 NVM Subsystem Reset: Not Supported 00:29:25.873 Command Sets Supported 00:29:25.873 NVM Command Set: Supported 00:29:25.873 Boot Partition: Not Supported 00:29:25.873 Memory Page Size Minimum: 4096 bytes 00:29:25.873 Memory Page Size Maximum: 4096 bytes 00:29:25.873 Persistent Memory Region: Not Supported 00:29:25.873 Optional Asynchronous Events Supported 00:29:25.873 Namespace Attribute Notices: Supported 00:29:25.873 Firmware Activation Notices: Not Supported 00:29:25.873 ANA Change Notices: Not Supported 00:29:25.873 PLE Aggregate Log Change Notices: Not Supported 00:29:25.873 LBA Status Info Alert Notices: Not Supported 00:29:25.873 EGE Aggregate Log Change Notices: Not Supported 00:29:25.873 Normal NVM Subsystem Shutdown event: Not Supported 00:29:25.873 Zone Descriptor Change Notices: Not Supported 00:29:25.873 Discovery Log Change Notices: Not Supported 00:29:25.873 Controller Attributes 00:29:25.873 128-bit Host Identifier: Supported 00:29:25.873 Non-Operational Permissive Mode: Not Supported 00:29:25.873 NVM Sets: Not Supported 00:29:25.873 Read Recovery Levels: Not Supported 00:29:25.873 Endurance Groups: Not Supported 00:29:25.873 Predictable Latency Mode: Not Supported 00:29:25.873 Traffic Based Keep ALive: Not Supported 00:29:25.873 Namespace Granularity: Not Supported 00:29:25.873 SQ Associations: Not Supported 00:29:25.873 UUID List: Not Supported 00:29:25.873 Multi-Domain Subsystem: Not Supported 00:29:25.873 
00:29:25.873 Fixed Capacity Management: Not Supported
00:29:25.873 Variable Capacity Management: Not Supported
00:29:25.873 Delete Endurance Group: Not Supported
00:29:25.873 Delete NVM Set: Not Supported
00:29:25.873 Extended LBA Formats Supported: Not Supported
00:29:25.873 Flexible Data Placement Supported: Not Supported
00:29:25.873
00:29:25.873 Controller Memory Buffer Support
00:29:25.873 ================================
00:29:25.873 Supported: No
00:29:25.873
00:29:25.873 Persistent Memory Region Support
00:29:25.873 ================================
00:29:25.873 Supported: No
00:29:25.873
00:29:25.873 Admin Command Set Attributes
00:29:25.873 ============================
00:29:25.873 Security Send/Receive: Not Supported
00:29:25.873 Format NVM: Not Supported
00:29:25.873 Firmware Activate/Download: Not Supported
00:29:25.873 Namespace Management: Not Supported
00:29:25.873 Device Self-Test: Not Supported
00:29:25.873 Directives: Not Supported
00:29:25.873 NVMe-MI: Not Supported
00:29:25.873 Virtualization Management: Not Supported
00:29:25.873 Doorbell Buffer Config: Not Supported
00:29:25.873 Get LBA Status Capability: Not Supported
00:29:25.873 Command & Feature Lockdown Capability: Not Supported
00:29:25.873 Abort Command Limit: 4
00:29:25.873 Async Event Request Limit: 4
00:29:25.873 Number of Firmware Slots: N/A
00:29:25.873 Firmware Slot 1 Read-Only: N/A
00:29:25.873 Firmware Activation Without Reset: N/A
00:29:25.873 Multiple Update Detection Support: N/A
00:29:25.873 Firmware Update Granularity: No Information Provided
00:29:25.873 Per-Namespace SMART Log: No
00:29:25.873 Asymmetric Namespace Access Log Page: Not Supported
00:29:25.873 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:29:25.873 Command Effects Log Page: Supported
00:29:25.873 Get Log Page Extended Data: Supported
00:29:25.873 Telemetry Log Pages: Not Supported
00:29:25.873 Persistent Event Log Pages: Not Supported
00:29:25.873 Supported Log Pages Log Page: May Support
00:29:25.873 Commands Supported & Effects Log Page: Not Supported
00:29:25.873 Feature Identifiers & Effects Log Page: May Support
00:29:25.873 NVMe-MI Commands & Effects Log Page: May Support
00:29:25.873 Data Area 4 for Telemetry Log: Not Supported
00:29:25.873 Error Log Page Entries Supported: 128
00:29:25.874 Keep Alive: Supported
00:29:25.874 Keep Alive Granularity: 10000 ms
00:29:25.874
00:29:25.874 NVM Command Set Attributes
00:29:25.874 ==========================
00:29:25.874 Submission Queue Entry Size
00:29:25.874 Max: 64
00:29:25.874 Min: 64
00:29:25.874 Completion Queue Entry Size
00:29:25.874 Max: 16
00:29:25.874 Min: 16
00:29:25.874 Number of Namespaces: 32
00:29:25.874 Compare Command: Supported
00:29:25.874 Write Uncorrectable Command: Not Supported
00:29:25.874 Dataset Management Command: Supported
00:29:25.874 Write Zeroes Command: Supported
00:29:25.874 Set Features Save Field: Not Supported
00:29:25.874 Reservations: Supported
00:29:25.874 Timestamp: Not Supported
00:29:25.874 Copy: Supported
00:29:25.874 Volatile Write Cache: Present
00:29:25.874 Atomic Write Unit (Normal): 1
00:29:25.874 Atomic Write Unit (PFail): 1
00:29:25.874 Atomic Compare & Write Unit: 1
00:29:25.874 Fused Compare & Write: Supported
00:29:25.874 Scatter-Gather List
00:29:25.874 SGL Command Set: Supported
00:29:25.874 SGL Keyed: Supported
00:29:25.874 SGL Bit Bucket Descriptor: Not Supported
00:29:25.874 SGL Metadata Pointer: Not Supported
00:29:25.874 Oversized SGL: Not Supported
00:29:25.874 SGL Metadata Address: Not Supported
00:29:25.874 SGL Offset: Supported
00:29:25.874 Transport SGL Data Block: Not Supported
00:29:25.874 Replay Protected Memory Block: Not Supported
00:29:25.874
00:29:25.874 Firmware Slot Information
00:29:25.874 =========================
00:29:25.874 Active slot: 1
00:29:25.874 Slot 1 Firmware Revision: 25.01
00:29:25.874
00:29:25.874
00:29:25.874 Commands Supported and Effects
00:29:25.874 ==============================
00:29:25.874 Admin Commands
00:29:25.874 --------------
00:29:25.874 Get Log Page (02h): Supported
00:29:25.874 Identify (06h): Supported
00:29:25.874 Abort (08h): Supported
00:29:25.874 Set Features (09h): Supported
00:29:25.874 Get Features (0Ah): Supported
00:29:25.874 Asynchronous Event Request (0Ch): Supported
00:29:25.874 Keep Alive (18h): Supported
00:29:25.874 I/O Commands
00:29:25.874 ------------
00:29:25.874 Flush (00h): Supported LBA-Change
00:29:25.874 Write (01h): Supported LBA-Change
00:29:25.874 Read (02h): Supported
00:29:25.874 Compare (05h): Supported
00:29:25.874 Write Zeroes (08h): Supported LBA-Change
00:29:25.874 Dataset Management (09h): Supported LBA-Change
00:29:25.874 Copy (19h): Supported LBA-Change
00:29:25.874
00:29:25.874 Error Log
00:29:25.874 =========
00:29:25.874
00:29:25.874 Arbitration
00:29:25.874 ===========
00:29:25.874 Arbitration Burst: 1
00:29:25.874
00:29:25.874 Power Management
00:29:25.874 ================
00:29:25.874 Number of Power States: 1
00:29:25.874 Current Power State: Power State #0
00:29:25.874 Power State #0:
00:29:25.874 Max Power: 0.00 W
00:29:25.874 Non-Operational State: Operational
00:29:25.874 Entry Latency: Not Reported
00:29:25.874 Exit Latency: Not Reported
00:29:25.874 Relative Read Throughput: 0
00:29:25.874 Relative Read Latency: 0
00:29:25.874 Relative Write Throughput: 0
00:29:25.874 Relative Write Latency: 0
00:29:25.874 Idle Power: Not Reported
00:29:25.874 Active Power: Not Reported
00:29:25.874 Non-Operational Permissive Mode: Not Supported
00:29:25.874
00:29:25.874 Health Information
00:29:25.874 ==================
00:29:25.874 Critical Warnings:
00:29:25.874 Available Spare Space: OK
00:29:25.874 Temperature: OK
00:29:25.874 Device Reliability: OK
00:29:25.874 Read Only: No
00:29:25.874 Volatile Memory Backup: OK
00:29:25.874 Current Temperature: 0 Kelvin (-273 Celsius)
00:29:25.874 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:29:25.874 Available Spare: 0%
00:29:25.874 Available Spare Threshold: 0%
00:29:25.874 Life Percentage Used:[2024-12-15 13:09:33.582121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13b7ed0)
00:29:25.874 [2024-12-15 13:09:33.582132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.874 [2024-12-15 13:09:33.582144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1423fc0, cid 7, qid 0
00:29:25.874 [2024-12-15 13:09:33.582298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:25.874 [2024-12-15 13:09:33.582303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:25.874 [2024-12-15 13:09:33.582306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423fc0) on tqpair=0x13b7ed0
00:29:25.874 [2024-12-15 13:09:33.582338] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:29:25.874 [2024-12-15 13:09:33.582348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423540) on tqpair=0x13b7ed0
00:29:25.874 [2024-12-15 13:09:33.582353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.874 [2024-12-15 13:09:33.582358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14236c0) on tqpair=0x13b7ed0
00:29:25.874 [2024-12-15 13:09:33.582362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.874 [2024-12-15 13:09:33.582366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1423840) on tqpair=0x13b7ed0
00:29:25.874 [2024-12-15 13:09:33.582370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.874 [2024-12-15 13:09:33.582374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0
00:29:25.874 [2024-12-15 13:09:33.582378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.874 [2024-12-15 13:09:33.582385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b7ed0)
00:29:25.874 [2024-12-15 13:09:33.582397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.874 [2024-12-15 13:09:33.582409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14239c0, cid 3, qid 0
00:29:25.874 [2024-12-15 13:09:33.582475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:25.874 [2024-12-15 13:09:33.582481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:25.874 [2024-12-15 13:09:33.582484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0
00:29:25.874 [2024-12-15 13:09:33.582492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b7ed0)
00:29:25.874 [2024-12-15 13:09:33.582504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.874 [2024-12-15 13:09:33.582516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14239c0, cid 3, qid 0
00:29:25.874 [2024-12-15 13:09:33.582586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:25.874 [2024-12-15 13:09:33.582592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:25.874 [2024-12-15 13:09:33.582595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0
00:29:25.874 [2024-12-15 13:09:33.582602] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:29:25.874 [2024-12-15 13:09:33.582606] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:29:25.874 [2024-12-15 13:09:33.582614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b7ed0)
00:29:25.874 [2024-12-15 13:09:33.582626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.874 [2024-12-15 13:09:33.582635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14239c0, cid 3, qid 0
00:29:25.874 [2024-12-15 13:09:33.582698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:25.874 [2024-12-15 13:09:33.582704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:25.874 [2024-12-15 13:09:33.582707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0
00:29:25.874 [2024-12-15 13:09:33.582718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b7ed0)
00:29:25.874 [2024-12-15 13:09:33.582730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.874 [2024-12-15 13:09:33.582739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14239c0, cid 3, qid 0
00:29:25.874 [2024-12-15 13:09:33.582806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:25.874 [2024-12-15 13:09:33.582812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:25.874 [2024-12-15 13:09:33.582815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:25.874 [2024-12-15 13:09:33.582818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0
13:09:33.585498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.877 [2024-12-15 13:09:33.585501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0 00:29:25.877 [2024-12-15 13:09:33.585512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b7ed0) 00:29:25.877 [2024-12-15 13:09:33.585525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.877 [2024-12-15 13:09:33.585535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14239c0, cid 3, qid 0 00:29:25.877 [2024-12-15 13:09:33.585593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.877 [2024-12-15 13:09:33.585598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.877 [2024-12-15 13:09:33.585601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0 00:29:25.877 [2024-12-15 13:09:33.585612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b7ed0) 00:29:25.877 [2024-12-15 13:09:33.585624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.877 [2024-12-15 
13:09:33.585632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14239c0, cid 3, qid 0 00:29:25.877 [2024-12-15 13:09:33.585693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.877 [2024-12-15 13:09:33.585698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.877 [2024-12-15 13:09:33.585701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0 00:29:25.877 [2024-12-15 13:09:33.585712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b7ed0) 00:29:25.877 [2024-12-15 13:09:33.585724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.877 [2024-12-15 13:09:33.585732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14239c0, cid 3, qid 0 00:29:25.877 [2024-12-15 13:09:33.585792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.877 [2024-12-15 13:09:33.585797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.877 [2024-12-15 13:09:33.585800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0 00:29:25.877 [2024-12-15 13:09:33.585811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.585818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x13b7ed0) 00:29:25.877 [2024-12-15 13:09:33.585823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.877 [2024-12-15 13:09:33.589839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14239c0, cid 3, qid 0 00:29:25.877 [2024-12-15 13:09:33.589905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:25.877 [2024-12-15 13:09:33.589911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:25.877 [2024-12-15 13:09:33.589914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:25.877 [2024-12-15 13:09:33.589917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14239c0) on tqpair=0x13b7ed0 00:29:25.877 [2024-12-15 13:09:33.589925] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:29:25.877 0% 00:29:25.877 Data Units Read: 0 00:29:25.877 Data Units Written: 0 00:29:25.877 Host Read Commands: 0 00:29:25.877 Host Write Commands: 0 00:29:25.877 Controller Busy Time: 0 minutes 00:29:25.877 Power Cycles: 0 00:29:25.877 Power On Hours: 0 hours 00:29:25.877 Unsafe Shutdowns: 0 00:29:25.877 Unrecoverable Media Errors: 0 00:29:25.877 Lifetime Error Log Entries: 0 00:29:25.877 Warning Temperature Time: 0 minutes 00:29:25.877 Critical Temperature Time: 0 minutes 00:29:25.877 00:29:25.877 Number of Queues 00:29:25.877 ================ 00:29:25.877 Number of I/O Submission Queues: 127 00:29:25.877 Number of I/O Completion Queues: 127 00:29:25.877 00:29:25.877 Active Namespaces 00:29:25.877 ================= 00:29:25.877 Namespace ID:1 00:29:25.877 Error Recovery Timeout: Unlimited 00:29:25.877 Command Set Identifier: NVM (00h) 00:29:25.877 Deallocate: Supported 00:29:25.877 Deallocated/Unwritten Error: Not Supported 00:29:25.877 Deallocated Read Value: Unknown 00:29:25.877 Deallocate in Write 
Zeroes: Not Supported 00:29:25.877 Deallocated Guard Field: 0xFFFF 00:29:25.877 Flush: Supported 00:29:25.877 Reservation: Supported 00:29:25.877 Namespace Sharing Capabilities: Multiple Controllers 00:29:25.877 Size (in LBAs): 131072 (0GiB) 00:29:25.877 Capacity (in LBAs): 131072 (0GiB) 00:29:25.877 Utilization (in LBAs): 131072 (0GiB) 00:29:25.877 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:25.877 EUI64: ABCDEF0123456789 00:29:25.877 UUID: 7b0cd545-13ba-480a-9930-e2f1ea3e03e9 00:29:25.877 Thin Provisioning: Not Supported 00:29:25.877 Per-NS Atomic Units: Yes 00:29:25.877 Atomic Boundary Size (Normal): 0 00:29:25.877 Atomic Boundary Size (PFail): 0 00:29:25.877 Atomic Boundary Offset: 0 00:29:25.877 Maximum Single Source Range Length: 65535 00:29:25.877 Maximum Copy Length: 65535 00:29:25.877 Maximum Source Range Count: 1 00:29:25.877 NGUID/EUI64 Never Reused: No 00:29:25.877 Namespace Write Protected: No 00:29:25.877 Number of LBA Formats: 1 00:29:25.877 Current LBA Format: LBA Format #00 00:29:25.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:25.877 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 
-- # sync 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.877 rmmod nvme_tcp 00:29:25.877 rmmod nvme_fabrics 00:29:25.877 rmmod nvme_keyring 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1116606 ']' 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1116606 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1116606 ']' 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1116606 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1116606 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:25.877 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1116606' 00:29:25.877 killing process with pid 1116606 00:29:25.878 13:09:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1116606 00:29:25.878 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1116606 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.137 13:09:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.673 13:09:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.673 00:29:28.673 real 0m9.273s 00:29:28.673 user 0m5.600s 00:29:28.673 sys 0m4.767s 00:29:28.673 13:09:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.673 13:09:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.673 ************************************ 00:29:28.673 END TEST nvmf_identify 00:29:28.673 
************************************ 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.673 ************************************ 00:29:28.673 START TEST nvmf_perf 00:29:28.673 ************************************ 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:28.673 * Looking for test storage... 00:29:28.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@337 -- # read -ra ver2 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.673 13:09:36 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.673 --rc genhtml_branch_coverage=1 00:29:28.673 --rc genhtml_function_coverage=1 00:29:28.673 --rc genhtml_legend=1 00:29:28.673 --rc geninfo_all_blocks=1 00:29:28.673 --rc geninfo_unexecuted_blocks=1 00:29:28.673 00:29:28.673 ' 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.673 --rc genhtml_branch_coverage=1 00:29:28.673 --rc genhtml_function_coverage=1 00:29:28.673 --rc genhtml_legend=1 00:29:28.673 --rc geninfo_all_blocks=1 00:29:28.673 --rc geninfo_unexecuted_blocks=1 00:29:28.673 00:29:28.673 ' 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.673 --rc genhtml_branch_coverage=1 00:29:28.673 --rc genhtml_function_coverage=1 00:29:28.673 --rc genhtml_legend=1 00:29:28.673 --rc geninfo_all_blocks=1 00:29:28.673 --rc geninfo_unexecuted_blocks=1 00:29:28.673 00:29:28.673 ' 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:28.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.673 --rc genhtml_branch_coverage=1 00:29:28.673 --rc genhtml_function_coverage=1 00:29:28.673 --rc genhtml_legend=1 00:29:28.673 --rc geninfo_all_blocks=1 00:29:28.673 --rc geninfo_unexecuted_blocks=1 00:29:28.673 00:29:28.673 ' 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.673 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.674 13:09:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:29:28.674 13:09:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:33.950 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.950 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.950 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.950 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.950 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.950 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.950 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.950 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.950 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:33.951 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.951 
13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:33.951 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:33.951 Found net devices under 0000:af:00.0: cvl_0_0 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:33.951 Found net devices under 0000:af:00.1: cvl_0_1 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.951 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.211 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.211 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.211 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.211 13:09:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:29:34.211 00:29:34.211 --- 10.0.0.2 ping statistics --- 00:29:34.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.211 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:34.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:29:34.211 00:29:34.211 --- 10.0.0.1 ping statistics --- 00:29:34.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.211 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.211 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1120101 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1120101 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:34.470 
13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1120101 ']' 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.470 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:34.470 [2024-12-15 13:09:42.185057] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:34.471 [2024-12-15 13:09:42.185110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.471 [2024-12-15 13:09:42.264745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.471 [2024-12-15 13:09:42.287991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.471 [2024-12-15 13:09:42.288032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.471 [2024-12-15 13:09:42.288039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.471 [2024-12-15 13:09:42.288046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.471 [2024-12-15 13:09:42.288051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:34.471 [2024-12-15 13:09:42.289557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.471 [2024-12-15 13:09:42.289666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.471 [2024-12-15 13:09:42.289780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.471 [2024-12-15 13:09:42.289781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.730 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.730 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:34.730 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.730 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.730 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:34.730 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.730 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:34.730 13:09:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:38.019 13:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:38.019 13:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:38.019 13:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:38.019 13:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:38.019 13:09:45 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:38.019 13:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:38.019 13:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:38.019 13:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:38.019 13:09:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:38.277 [2024-12-15 13:09:46.067005] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.277 13:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.536 13:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:38.536 13:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.794 13:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:38.794 13:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:38.794 13:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.053 [2024-12-15 13:09:46.867280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.053 13:09:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:29:39.312 13:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:39.312 13:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:39.312 13:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:39.312 13:09:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:40.688 Initializing NVMe Controllers 00:29:40.688 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:40.688 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:40.688 Initialization complete. Launching workers. 00:29:40.688 ======================================================== 00:29:40.688 Latency(us) 00:29:40.688 Device Information : IOPS MiB/s Average min max 00:29:40.688 PCIE (0000:5e:00.0) NSID 1 from core 0: 97797.25 382.02 326.55 9.51 4570.92 00:29:40.688 ======================================================== 00:29:40.688 Total : 97797.25 382.02 326.55 9.51 4570.92 00:29:40.688 00:29:40.688 13:09:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:42.064 Initializing NVMe Controllers 00:29:42.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:42.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:42.064 Initialization complete. Launching workers. 
00:29:42.064 ======================================================== 00:29:42.064 Latency(us) 00:29:42.064 Device Information : IOPS MiB/s Average min max 00:29:42.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 112.00 0.44 9266.65 102.88 45849.86 00:29:42.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.00 0.21 19239.77 5987.39 47894.44 00:29:42.064 ======================================================== 00:29:42.064 Total : 165.00 0.64 12470.14 102.88 47894.44 00:29:42.064 00:29:42.064 13:09:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.000 Initializing NVMe Controllers 00:29:43.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:43.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:43.000 Initialization complete. Launching workers. 
00:29:43.000 ======================================================== 00:29:43.000 Latency(us) 00:29:43.000 Device Information : IOPS MiB/s Average min max 00:29:43.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11253.00 43.96 2843.43 463.92 6257.36 00:29:43.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3771.00 14.73 8520.59 6725.88 16141.22 00:29:43.000 ======================================================== 00:29:43.000 Total : 15024.00 58.69 4268.39 463.92 16141.22 00:29:43.000 00:29:43.259 13:09:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:43.259 13:09:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:43.259 13:09:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:45.795 Initializing NVMe Controllers 00:29:45.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.795 Controller IO queue size 128, less than required. 00:29:45.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:45.795 Controller IO queue size 128, less than required. 00:29:45.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:45.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:45.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:45.795 Initialization complete. Launching workers. 
00:29:45.795 ======================================================== 00:29:45.795 Latency(us) 00:29:45.795 Device Information : IOPS MiB/s Average min max 00:29:45.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1882.33 470.58 69163.60 50581.58 102423.19 00:29:45.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.45 150.36 223575.11 96086.99 375090.24 00:29:45.795 ======================================================== 00:29:45.795 Total : 2483.78 620.95 106554.31 50581.58 375090.24 00:29:45.795 00:29:45.795 13:09:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:46.363 No valid NVMe controllers or AIO or URING devices found 00:29:46.363 Initializing NVMe Controllers 00:29:46.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.363 Controller IO queue size 128, less than required. 00:29:46.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.363 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:46.363 Controller IO queue size 128, less than required. 00:29:46.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.363 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:46.363 WARNING: Some requested NVMe devices were skipped 00:29:46.363 13:09:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:48.898 Initializing NVMe Controllers 00:29:48.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:48.898 Controller IO queue size 128, less than required. 00:29:48.898 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:48.898 Controller IO queue size 128, less than required. 00:29:48.898 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:48.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:48.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:48.898 Initialization complete. Launching workers. 
00:29:48.898 00:29:48.898 ==================== 00:29:48.898 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:48.898 TCP transport: 00:29:48.898 polls: 12206 00:29:48.898 idle_polls: 8071 00:29:48.898 sock_completions: 4135 00:29:48.898 nvme_completions: 6103 00:29:48.898 submitted_requests: 9150 00:29:48.898 queued_requests: 1 00:29:48.898 00:29:48.898 ==================== 00:29:48.898 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:48.898 TCP transport: 00:29:48.898 polls: 12390 00:29:48.898 idle_polls: 7943 00:29:48.898 sock_completions: 4447 00:29:48.898 nvme_completions: 6715 00:29:48.898 submitted_requests: 10120 00:29:48.898 queued_requests: 1 00:29:48.898 ======================================================== 00:29:48.898 Latency(us) 00:29:48.898 Device Information : IOPS MiB/s Average min max 00:29:48.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1522.37 380.59 86227.97 52198.42 144788.70 00:29:48.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1675.06 418.77 76617.66 41376.09 134771.69 00:29:48.898 ======================================================== 00:29:48.898 Total : 3197.43 799.36 81193.35 41376.09 144788.70 00:29:48.898 00:29:48.898 13:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:48.898 13:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.898 13:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:48.898 13:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:29:48.898 13:09:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:52.187 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=f5a269ce-e8d1-4178-9c41-91afaac0510f 00:29:52.187 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb f5a269ce-e8d1-4178-9c41-91afaac0510f 00:29:52.187 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=f5a269ce-e8d1-4178-9c41-91afaac0510f 00:29:52.187 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:52.187 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:52.187 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:52.187 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:52.445 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:52.445 { 00:29:52.445 "uuid": "f5a269ce-e8d1-4178-9c41-91afaac0510f", 00:29:52.446 "name": "lvs_0", 00:29:52.446 "base_bdev": "Nvme0n1", 00:29:52.446 "total_data_clusters": 238234, 00:29:52.446 "free_clusters": 238234, 00:29:52.446 "block_size": 512, 00:29:52.446 "cluster_size": 4194304 00:29:52.446 } 00:29:52.446 ]' 00:29:52.446 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="f5a269ce-e8d1-4178-9c41-91afaac0510f") .free_clusters' 00:29:52.446 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:29:52.446 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="f5a269ce-e8d1-4178-9c41-91afaac0510f") .cluster_size' 00:29:52.446 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:52.446 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:29:52.446 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:29:52.446 952936 00:29:52.446 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:52.446 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:52.446 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f5a269ce-e8d1-4178-9c41-91afaac0510f lbd_0 20480 00:29:53.012 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e8edca7b-f6e1-4a7e-b6e9-8ab6d1ad14e0 00:29:53.012 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e8edca7b-f6e1-4a7e-b6e9-8ab6d1ad14e0 lvs_n_0 00:29:53.579 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=c9a1733f-c494-429e-8ea8-9ada8bee7cb9 00:29:53.579 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb c9a1733f-c494-429e-8ea8-9ada8bee7cb9 00:29:53.579 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=c9a1733f-c494-429e-8ea8-9ada8bee7cb9 00:29:53.579 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:53.579 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:53.579 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:53.579 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:53.838 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:53.838 { 00:29:53.838 "uuid": "f5a269ce-e8d1-4178-9c41-91afaac0510f", 00:29:53.838 "name": "lvs_0", 00:29:53.838 "base_bdev": "Nvme0n1", 00:29:53.838 "total_data_clusters": 238234, 00:29:53.838 "free_clusters": 233114, 00:29:53.838 "block_size": 512, 00:29:53.838 
"cluster_size": 4194304 00:29:53.838 }, 00:29:53.838 { 00:29:53.838 "uuid": "c9a1733f-c494-429e-8ea8-9ada8bee7cb9", 00:29:53.838 "name": "lvs_n_0", 00:29:53.838 "base_bdev": "e8edca7b-f6e1-4a7e-b6e9-8ab6d1ad14e0", 00:29:53.838 "total_data_clusters": 5114, 00:29:53.838 "free_clusters": 5114, 00:29:53.838 "block_size": 512, 00:29:53.838 "cluster_size": 4194304 00:29:53.838 } 00:29:53.838 ]' 00:29:53.838 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c9a1733f-c494-429e-8ea8-9ada8bee7cb9") .free_clusters' 00:29:53.838 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:29:53.838 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c9a1733f-c494-429e-8ea8-9ada8bee7cb9") .cluster_size' 00:29:53.838 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:53.838 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:29:53.838 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:29:53.838 20456 00:29:53.838 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:53.838 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c9a1733f-c494-429e-8ea8-9ada8bee7cb9 lbd_nest_0 20456 00:29:54.097 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=37b8783b-677f-4ec6-873f-75d06d6f8b67 00:29:54.097 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.356 13:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:54.356 13:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 37b8783b-677f-4ec6-873f-75d06d6f8b67 00:29:54.356 13:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.615 13:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:54.615 13:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:54.615 13:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:54.615 13:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:54.615 13:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:06.827 Initializing NVMe Controllers 00:30:06.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:06.827 Initialization complete. Launching workers. 
00:30:06.827 ======================================================== 00:30:06.827 Latency(us) 00:30:06.827 Device Information : IOPS MiB/s Average min max 00:30:06.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.98 0.02 22299.22 125.47 45694.27 00:30:06.827 ======================================================== 00:30:06.827 Total : 44.98 0.02 22299.22 125.47 45694.27 00:30:06.827 00:30:06.827 13:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:06.827 13:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.806 Initializing NVMe Controllers 00:30:16.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:16.806 Initialization complete. Launching workers. 
00:30:16.806 ======================================================== 00:30:16.806 Latency(us) 00:30:16.806 Device Information : IOPS MiB/s Average min max 00:30:16.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.20 9.03 13868.41 6031.19 47890.64 00:30:16.806 ======================================================== 00:30:16.806 Total : 72.20 9.03 13868.41 6031.19 47890.64 00:30:16.806 00:30:16.806 13:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:16.806 13:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:16.806 13:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:26.812 Initializing NVMe Controllers 00:30:26.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:26.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:26.812 Initialization complete. Launching workers. 
00:30:26.812 ======================================================== 00:30:26.812 Latency(us) 00:30:26.812 Device Information : IOPS MiB/s Average min max 00:30:26.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8652.43 4.22 3699.22 247.42 9952.67 00:30:26.812 ======================================================== 00:30:26.812 Total : 8652.43 4.22 3699.22 247.42 9952.67 00:30:26.812 00:30:26.812 13:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:26.812 13:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:36.891 Initializing NVMe Controllers 00:30:36.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:36.891 Initialization complete. Launching workers. 
00:30:36.891 ======================================================== 00:30:36.891 Latency(us) 00:30:36.891 Device Information : IOPS MiB/s Average min max 00:30:36.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4405.70 550.71 7267.28 707.59 16401.66 00:30:36.891 ======================================================== 00:30:36.891 Total : 4405.70 550.71 7267.28 707.59 16401.66 00:30:36.891 00:30:36.891 13:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:36.891 13:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:36.891 13:10:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:46.865 Initializing NVMe Controllers 00:30:46.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.865 Controller IO queue size 128, less than required. 00:30:46.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:46.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:46.865 Initialization complete. Launching workers. 
00:30:46.865 ======================================================== 00:30:46.865 Latency(us) 00:30:46.865 Device Information : IOPS MiB/s Average min max 00:30:46.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15818.64 7.72 8094.31 1352.17 24279.34 00:30:46.865 ======================================================== 00:30:46.865 Total : 15818.64 7.72 8094.31 1352.17 24279.34 00:30:46.865 00:30:46.865 13:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:46.865 13:10:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:56.843 Initializing NVMe Controllers 00:30:56.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:56.843 Controller IO queue size 128, less than required. 00:30:56.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:56.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:56.843 Initialization complete. Launching workers. 
00:30:56.843 ======================================================== 00:30:56.843 Latency(us) 00:30:56.843 Device Information : IOPS MiB/s Average min max 00:30:56.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1207.00 150.87 106424.44 16387.71 207348.59 00:30:56.843 ======================================================== 00:30:56.843 Total : 1207.00 150.87 106424.44 16387.71 207348.59 00:30:56.843 00:30:56.843 13:11:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:56.843 13:11:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37b8783b-677f-4ec6-873f-75d06d6f8b67 00:30:57.781 13:11:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:57.781 13:11:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8edca7b-f6e1-4a7e-b6e9-8ab6d1ad14e0 00:30:58.040 13:11:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:58.299 13:11:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:58.299 13:11:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:58.299 13:11:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:58.299 rmmod nvme_tcp 00:30:58.299 rmmod nvme_fabrics 00:30:58.299 rmmod nvme_keyring 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1120101 ']' 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1120101 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1120101 ']' 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1120101 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1120101 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1120101' 00:30:58.299 killing process with pid 1120101 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1120101 00:30:58.299 13:11:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1120101 00:30:59.676 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:59.676 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:30:59.676 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:59.676 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:59.935 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:30:59.935 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:59.935 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:30:59.935 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.935 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.935 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.935 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.935 13:11:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.841 13:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.841 00:31:01.841 real 1m33.590s 00:31:01.841 user 5m33.761s 00:31:01.841 sys 0m17.291s 00:31:01.841 13:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.841 13:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:01.841 ************************************ 00:31:01.841 END TEST nvmf_perf 00:31:01.841 ************************************ 00:31:01.841 13:11:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:01.841 13:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:01.841 13:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.841 13:11:09 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:01.841 ************************************ 00:31:01.841 START TEST nvmf_fio_host 00:31:01.841 ************************************ 00:31:01.841 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:02.101 * Looking for test storage... 00:31:02.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:02.101 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:31:02.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.102 --rc genhtml_branch_coverage=1 00:31:02.102 --rc genhtml_function_coverage=1 00:31:02.102 --rc genhtml_legend=1 00:31:02.102 --rc geninfo_all_blocks=1 00:31:02.102 --rc geninfo_unexecuted_blocks=1 00:31:02.102 00:31:02.102 ' 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:02.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.102 --rc genhtml_branch_coverage=1 00:31:02.102 --rc genhtml_function_coverage=1 00:31:02.102 --rc genhtml_legend=1 00:31:02.102 --rc geninfo_all_blocks=1 00:31:02.102 --rc geninfo_unexecuted_blocks=1 00:31:02.102 00:31:02.102 ' 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:02.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.102 --rc genhtml_branch_coverage=1 00:31:02.102 --rc genhtml_function_coverage=1 00:31:02.102 --rc genhtml_legend=1 00:31:02.102 --rc geninfo_all_blocks=1 00:31:02.102 --rc geninfo_unexecuted_blocks=1 00:31:02.102 00:31:02.102 ' 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:02.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.102 --rc genhtml_branch_coverage=1 00:31:02.102 --rc genhtml_function_coverage=1 00:31:02.102 --rc genhtml_legend=1 00:31:02.102 --rc geninfo_all_blocks=1 00:31:02.102 --rc geninfo_unexecuted_blocks=1 00:31:02.102 00:31:02.102 ' 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.102 13:11:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:02.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:02.102 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:02.103 13:11:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:02.103 13:11:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.676 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:31:08.677 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:08.677 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.677 13:11:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:08.677 Found net devices under 0000:af:00.0: cvl_0_0 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:08.677 Found net devices under 0000:af:00.1: cvl_0_1 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
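The discovery loop traced above (common.sh@411 and @427) maps each PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the directory prefix with the `##*/` parameter expansion. A minimal sketch of that expansion, using hypothetical sysfs paths rather than ones probed from a real host:

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device name mapping seen in nvmf/common.sh.
# The sysfs paths below are hypothetical examples, not read from a live system.
pci_net_devs=(
  "/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0"
  "/sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1"
)
# "${arr[@]##*/}" applies the longest-match prefix strip "*/" to every
# element, leaving only the final path component (the interface name).
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[@]}"   # -> cvl_0_0 cvl_0_1
```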
00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.677 13:11:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:08.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:31:08.677 00:31:08.677 --- 10.0.0.2 ping statistics --- 00:31:08.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.677 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:08.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:31:08.677 00:31:08.677 --- 10.0.0.1 ping statistics --- 00:31:08.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.677 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1136994 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1136994 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1136994 ']' 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.677 13:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.677 [2024-12-15 13:11:15.925277] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:31:08.677 [2024-12-15 13:11:15.925318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.678 [2024-12-15 13:11:16.006208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.678 [2024-12-15 13:11:16.029265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.678 [2024-12-15 13:11:16.029303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
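The firewall step in the trace (common.sh@287 expanding at @790) shows an `ipts` helper that forwards its arguments to iptables while tagging the rule with an `SPDK_NVMF:` comment, so the harness can later find and remove only its own rules. A dry-run sketch of that wrapper, with `echo` standing in for the real iptables binary so it runs without root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ipts() wrapper visible in the trace. "$*" joins the
# original arguments into one string, which becomes the rule's comment; this
# reproduces the 'SPDK_NVMF:-I INPUT 1 ...' tag seen in the log.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
rule="$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)"
echo "$rule"
```

On teardown, a rule tagged this way can be located with `iptables -S | grep SPDK_NVMF` and deleted without disturbing unrelated entries.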
00:31:08.678 [2024-12-15 13:11:16.029310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.678 [2024-12-15 13:11:16.029316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.678 [2024-12-15 13:11:16.029321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.678 [2024-12-15 13:11:16.030782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.678 [2024-12-15 13:11:16.030892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.678 [2024-12-15 13:11:16.030933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.678 [2024-12-15 13:11:16.030934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:08.678 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.678 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:08.678 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:08.678 [2024-12-15 13:11:16.303064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.678 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:08.678 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.678 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.678 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:08.678 Malloc1 00:31:08.936 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:08.936 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:09.194 13:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:09.452 [2024-12-15 13:11:17.132656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.452 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:09.734 13:11:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:09.734 13:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:09.992 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:09.992 fio-3.35 00:31:09.992 Starting 1 thread 00:31:12.516 00:31:12.516 test: (groupid=0, jobs=1): err= 0: pid=1137571: Sun Dec 15 13:11:20 2024 00:31:12.516 read: IOPS=12.0k, BW=46.8MiB/s (49.0MB/s)(93.7MiB/2004msec) 00:31:12.516 slat (nsec): min=1494, max=249527, avg=1674.37, stdev=2216.91 00:31:12.516 clat (usec): min=3157, max=10635, avg=5910.76, stdev=472.60 00:31:12.516 lat (usec): min=3191, max=10637, avg=5912.44, stdev=472.54 00:31:12.516 clat percentiles (usec): 00:31:12.516 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538], 00:31:12.516 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:31:12.516 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6652], 00:31:12.516 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 8848], 99.95th=[ 9896], 00:31:12.516 | 99.99th=[10159] 00:31:12.516 bw ( KiB/s): min=47336, max=48152, per=99.90%, avg=47848.00, stdev=384.67, samples=4 00:31:12.516 iops : min=11834, max=12036, avg=11961.50, stdev=95.64, samples=4 00:31:12.516 write: IOPS=11.9k, BW=46.6MiB/s (48.8MB/s)(93.3MiB/2004msec); 0 zone resets 00:31:12.516 slat (nsec): min=1543, max=224238, avg=1739.75, stdev=1625.95 00:31:12.516 clat (usec): min=2410, max=9464, avg=4781.13, stdev=381.85 00:31:12.516 lat (usec): min=2425, max=9465, avg=4782.87, stdev=381.87 00:31:12.516 clat percentiles (usec): 00:31:12.516 | 1.00th=[ 3884], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:31:12.516 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 
00:31:12.516 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:31:12.516 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 7177], 99.95th=[ 7963], 00:31:12.516 | 99.99th=[ 8979] 00:31:12.516 bw ( KiB/s): min=47040, max=48168, per=99.97%, avg=47668.00, stdev=520.86, samples=4 00:31:12.516 iops : min=11760, max=12042, avg=11917.00, stdev=130.22, samples=4 00:31:12.516 lat (msec) : 4=0.90%, 10=99.09%, 20=0.02% 00:31:12.516 cpu : usr=73.39%, sys=25.71%, ctx=109, majf=0, minf=3 00:31:12.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:12.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:12.516 issued rwts: total=23996,23889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:12.516 00:31:12.516 Run status group 0 (all jobs): 00:31:12.516 READ: bw=46.8MiB/s (49.0MB/s), 46.8MiB/s-46.8MiB/s (49.0MB/s-49.0MB/s), io=93.7MiB (98.3MB), run=2004-2004msec 00:31:12.516 WRITE: bw=46.6MiB/s (48.8MB/s), 46.6MiB/s-46.6MiB/s (48.8MB/s-48.8MB/s), io=93.3MiB (97.8MB), run=2004-2004msec 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:12.516 13:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:12.773 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:12.773 fio-3.35 00:31:12.773 Starting 1 thread 00:31:15.297 00:31:15.297 test: (groupid=0, jobs=1): err= 0: pid=1138132: Sun Dec 15 13:11:22 2024 00:31:15.297 read: IOPS=11.0k, BW=172MiB/s (180MB/s)(344MiB/2007msec) 00:31:15.297 slat (nsec): min=2459, max=98664, avg=2780.79, stdev=1307.50 00:31:15.297 clat (usec): min=1205, max=13428, avg=6717.41, stdev=1627.08 00:31:15.297 lat (usec): min=1208, max=13431, avg=6720.19, stdev=1627.22 00:31:15.297 clat percentiles (usec): 00:31:15.297 | 1.00th=[ 3556], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5342], 00:31:15.297 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:31:15.297 | 70.00th=[ 7439], 80.00th=[ 8029], 90.00th=[ 8848], 95.00th=[ 9634], 00:31:15.297 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12387], 99.95th=[12518], 00:31:15.297 | 99.99th=[13042] 00:31:15.297 bw ( KiB/s): min=85536, max=96638, per=50.77%, avg=89175.50, stdev=5153.81, samples=4 00:31:15.297 iops : min= 5346, max= 6039, avg=5573.25, stdev=321.69, samples=4 00:31:15.297 write: IOPS=6497, BW=102MiB/s (106MB/s)(183MiB/1799msec); 0 zone resets 00:31:15.297 slat (usec): min=28, max=387, avg=31.33, stdev= 7.61 00:31:15.297 clat (usec): min=3969, max=15070, avg=8555.60, stdev=1503.88 00:31:15.297 lat (usec): min=4003, max=15181, avg=8586.93, stdev=1505.40 00:31:15.297 clat percentiles (usec): 00:31:15.297 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 6849], 
20.00th=[ 7308], 00:31:15.297 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:31:15.297 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11338], 00:31:15.297 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14746], 99.95th=[14877], 00:31:15.297 | 99.99th=[15008] 00:31:15.297 bw ( KiB/s): min=89312, max=100598, per=89.34%, avg=92877.50, stdev=5203.26, samples=4 00:31:15.297 iops : min= 5582, max= 6287, avg=5804.75, stdev=325.02, samples=4 00:31:15.297 lat (msec) : 2=0.02%, 4=1.84%, 10=89.91%, 20=8.23% 00:31:15.297 cpu : usr=86.24%, sys=13.06%, ctx=39, majf=0, minf=3 00:31:15.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:15.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:15.297 issued rwts: total=22032,11689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:15.297 00:31:15.297 Run status group 0 (all jobs): 00:31:15.297 READ: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=344MiB (361MB), run=2007-2007msec 00:31:15.297 WRITE: bw=102MiB/s (106MB/s), 102MiB/s-102MiB/s (106MB/s-106MB/s), io=183MiB (192MB), run=1799-1799msec 00:31:15.297 13:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 
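The fio summaries above report each bandwidth twice, e.g. "46.8MiB/s (49.0MB/s)": the first figure is binary (MiB = 2^20 bytes), the second decimal (MB = 10^6 bytes), so the decimal value is the binary one scaled by 1048576/1000000. A small sketch of that conversion (fio rounds from its internal byte counts, so results computed from the already-rounded MiB/s figure can differ in the last digit):

```shell
#!/usr/bin/env bash
# Convert a MiB/s figure to MB/s: 1 MiB = 1048576 bytes, 1 MB = 1000000 bytes.
mib_to_mb() {
  awk -v v="$1" 'BEGIN { printf "%.1f", v * 1048576 / 1000000 }'
}
mib_to_mb 46.8   # ~49.1, close to the 49.0MB/s fio printed from unrounded data
echo
```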
00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:15.297 13:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:18.573 Nvme0n1 00:31:18.573 13:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=da81bfaf-4a56-4eaf-ae99-22131c65810a 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb da81bfaf-4a56-4eaf-ae99-22131c65810a 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=da81bfaf-4a56-4eaf-ae99-22131c65810a 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:21.846 { 00:31:21.846 "uuid": "da81bfaf-4a56-4eaf-ae99-22131c65810a", 00:31:21.846 "name": "lvs_0", 00:31:21.846 "base_bdev": "Nvme0n1", 00:31:21.846 "total_data_clusters": 930, 00:31:21.846 "free_clusters": 930, 00:31:21.846 "block_size": 512, 00:31:21.846 "cluster_size": 1073741824 00:31:21.846 } 00:31:21.846 ]' 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="da81bfaf-4a56-4eaf-ae99-22131c65810a") .free_clusters' 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="da81bfaf-4a56-4eaf-ae99-22131c65810a") .cluster_size' 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:21.846 952320 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:21.846 bcf42b64-3d36-41b9-95b2-fd6c5297760a 00:31:21.846 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:22.103 13:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:22.362 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:22.637 13:11:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.895 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:22.895 fio-3.35 00:31:22.895 Starting 1 thread 00:31:25.419 [2024-12-15 13:11:33.014740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223aee0 is same with the state(6) to be set 00:31:25.419 00:31:25.419 test: (groupid=0, jobs=1): err= 0: pid=1139836: Sun Dec 15 13:11:33 2024 00:31:25.419 read: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(64.0MiB/2005msec) 00:31:25.419 slat (nsec): min=1516, max=101955, avg=1645.02, stdev=1129.18 00:31:25.419 clat 
(usec): min=570, max=169931, avg=8633.57, stdev=10203.37 00:31:25.419 lat (usec): min=572, max=169951, avg=8635.21, stdev=10203.54 00:31:25.419 clat percentiles (msec): 00:31:25.419 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:31:25.419 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:31:25.419 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:31:25.419 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:31:25.419 | 99.99th=[ 171] 00:31:25.419 bw ( KiB/s): min=23416, max=35936, per=99.80%, avg=32638.00, stdev=6152.04, samples=4 00:31:25.419 iops : min= 5854, max= 8984, avg=8159.50, stdev=1538.01, samples=4 00:31:25.419 write: IOPS=8173, BW=31.9MiB/s (33.5MB/s)(64.0MiB/2005msec); 0 zone resets 00:31:25.419 slat (nsec): min=1563, max=87198, avg=1704.59, stdev=740.39 00:31:25.419 clat (usec): min=206, max=168491, avg=6973.55, stdev=9528.34 00:31:25.419 lat (usec): min=208, max=168496, avg=6975.25, stdev=9528.52 00:31:25.419 clat percentiles (msec): 00:31:25.419 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:31:25.419 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:25.419 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:31:25.419 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 169], 99.95th=[ 169], 00:31:25.419 | 99.99th=[ 169] 00:31:25.419 bw ( KiB/s): min=24552, max=35520, per=99.92%, avg=32666.00, stdev=5410.93, samples=4 00:31:25.419 iops : min= 6138, max= 8880, avg=8166.50, stdev=1352.73, samples=4 00:31:25.419 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:31:25.420 lat (msec) : 2=0.04%, 4=0.23%, 10=99.15%, 20=0.15%, 250=0.39% 00:31:25.420 cpu : usr=70.81%, sys=28.44%, ctx=91, majf=0, minf=3 00:31:25.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:25.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:25.420 issued 
rwts: total=16392,16387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.420 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:25.420 00:31:25.420 Run status group 0 (all jobs): 00:31:25.420 READ: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.0MiB (67.1MB), run=2005-2005msec 00:31:25.420 WRITE: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.0MiB (67.1MB), run=2005-2005msec 00:31:25.420 13:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:25.420 13:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:26.791 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=3f816ef5-5968-4ed6-930f-6dbaea90a195 00:31:26.791 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 3f816ef5-5968-4ed6-930f-6dbaea90a195 00:31:26.791 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=3f816ef5-5968-4ed6-930f-6dbaea90a195 00:31:26.791 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:26.791 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:26.791 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:26.792 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:26.792 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:26.792 { 00:31:26.792 "uuid": "da81bfaf-4a56-4eaf-ae99-22131c65810a", 00:31:26.792 "name": "lvs_0", 00:31:26.792 "base_bdev": "Nvme0n1", 
00:31:26.792 "total_data_clusters": 930, 00:31:26.792 "free_clusters": 0, 00:31:26.792 "block_size": 512, 00:31:26.792 "cluster_size": 1073741824 00:31:26.792 }, 00:31:26.792 { 00:31:26.792 "uuid": "3f816ef5-5968-4ed6-930f-6dbaea90a195", 00:31:26.792 "name": "lvs_n_0", 00:31:26.792 "base_bdev": "bcf42b64-3d36-41b9-95b2-fd6c5297760a", 00:31:26.792 "total_data_clusters": 237847, 00:31:26.792 "free_clusters": 237847, 00:31:26.792 "block_size": 512, 00:31:26.792 "cluster_size": 4194304 00:31:26.792 } 00:31:26.792 ]' 00:31:26.792 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3f816ef5-5968-4ed6-930f-6dbaea90a195") .free_clusters' 00:31:26.792 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:26.792 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3f816ef5-5968-4ed6-930f-6dbaea90a195") .cluster_size' 00:31:27.049 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:27.049 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:27.049 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:27.049 951388 00:31:27.049 13:11:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:27.612 e1bdbe32-c1b4-4dce-afe4-8c5fe0a89594 00:31:27.612 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:27.612 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:27.869 13:11:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:28.126 13:11:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:28.126 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:28.127 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:28.127 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:28.127 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:28.127 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:28.127 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:28.127 13:11:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:28.384 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:28.384 fio-3.35 00:31:28.384 Starting 1 thread 00:31:30.911 00:31:30.911 test: (groupid=0, jobs=1): err= 0: pid=1140848: Sun Dec 15 13:11:38 2024 00:31:30.911 read: IOPS=7910, BW=30.9MiB/s (32.4MB/s)(62.0MiB/2006msec) 00:31:30.911 slat (nsec): min=1460, max=87935, avg=1645.74, stdev=1032.15 00:31:30.911 clat (usec): min=2991, max=14584, avg=8899.84, stdev=788.62 00:31:30.911 lat (usec): 
min=2995, max=14585, avg=8901.48, stdev=788.56 00:31:30.911 clat percentiles (usec): 00:31:30.911 | 1.00th=[ 6980], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8291], 00:31:30.911 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:31:30.911 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:31:30.911 | 99.00th=[10552], 99.50th=[10814], 99.90th=[13435], 99.95th=[14353], 00:31:30.911 | 99.99th=[14484] 00:31:30.911 bw ( KiB/s): min=30600, max=32088, per=99.84%, avg=31592.00, stdev=684.18, samples=4 00:31:30.911 iops : min= 7650, max= 8022, avg=7898.00, stdev=171.04, samples=4 00:31:30.911 write: IOPS=7884, BW=30.8MiB/s (32.3MB/s)(61.8MiB/2006msec); 0 zone resets 00:31:30.911 slat (nsec): min=1533, max=153969, avg=1707.41, stdev=1279.38 00:31:30.911 clat (usec): min=1401, max=12607, avg=7216.85, stdev=641.47 00:31:30.911 lat (usec): min=1407, max=12608, avg=7218.56, stdev=641.46 00:31:30.911 clat percentiles (usec): 00:31:30.911 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 00:31:30.911 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7373], 00:31:30.911 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8160], 00:31:30.911 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[10552], 99.95th=[11469], 00:31:30.911 | 99.99th=[12518] 00:31:30.911 bw ( KiB/s): min=31424, max=31592, per=99.95%, avg=31522.00, stdev=77.67, samples=4 00:31:30.911 iops : min= 7856, max= 7898, avg=7880.50, stdev=19.42, samples=4 00:31:30.911 lat (msec) : 2=0.01%, 4=0.11%, 10=96.36%, 20=3.52% 00:31:30.911 cpu : usr=70.72%, sys=28.48%, ctx=124, majf=0, minf=3 00:31:30.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:30.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:30.911 issued rwts: total=15868,15816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.911 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:31:30.911 00:31:30.911 Run status group 0 (all jobs): 00:31:30.911 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=62.0MiB (65.0MB), run=2006-2006msec 00:31:30.911 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.8MiB (64.8MB), run=2006-2006msec 00:31:30.911 13:11:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:30.911 13:11:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:30.911 13:11:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:35.094 13:11:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:35.094 13:11:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:38.376 13:11:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:38.376 13:11:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@121 -- # sync 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:39.894 rmmod nvme_tcp 00:31:39.894 rmmod nvme_fabrics 00:31:39.894 rmmod nvme_keyring 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1136994 ']' 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1136994 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1136994 ']' 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1136994 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136994 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:39.894 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136994' 00:31:39.895 killing process with pid 1136994 00:31:39.895 
13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1136994 00:31:39.895 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1136994 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.169 13:11:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.075 13:11:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:42.075 00:31:42.075 real 0m40.210s 00:31:42.075 user 2m41.092s 00:31:42.075 sys 0m8.874s 00:31:42.075 13:11:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.075 13:11:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.075 ************************************ 00:31:42.075 END TEST nvmf_fio_host 00:31:42.075 
************************************ 00:31:42.075 13:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:42.075 13:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:42.075 13:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:42.075 13:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.335 ************************************ 00:31:42.335 START TEST nvmf_failover 00:31:42.335 ************************************ 00:31:42.335 13:11:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:42.335 * Looking for test storage... 00:31:42.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # 
IFS=.-: 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:42.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.335 --rc genhtml_branch_coverage=1 00:31:42.335 --rc genhtml_function_coverage=1 00:31:42.335 --rc genhtml_legend=1 00:31:42.335 --rc geninfo_all_blocks=1 00:31:42.335 --rc geninfo_unexecuted_blocks=1 00:31:42.335 00:31:42.335 ' 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:42.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.335 --rc genhtml_branch_coverage=1 00:31:42.335 --rc genhtml_function_coverage=1 00:31:42.335 --rc genhtml_legend=1 00:31:42.335 --rc geninfo_all_blocks=1 00:31:42.335 --rc geninfo_unexecuted_blocks=1 00:31:42.335 00:31:42.335 ' 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:42.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.335 --rc genhtml_branch_coverage=1 00:31:42.335 --rc genhtml_function_coverage=1 00:31:42.335 --rc genhtml_legend=1 00:31:42.335 --rc geninfo_all_blocks=1 00:31:42.335 --rc geninfo_unexecuted_blocks=1 00:31:42.335 00:31:42.335 ' 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:42.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.335 --rc genhtml_branch_coverage=1 00:31:42.335 --rc genhtml_function_coverage=1 00:31:42.335 --rc genhtml_legend=1 00:31:42.335 --rc geninfo_all_blocks=1 00:31:42.335 --rc 
geninfo_unexecuted_blocks=1 00:31:42.335 00:31:42.335 ' 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.335 
13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:42.335 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:42.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:42.336 13:11:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.906 13:11:55 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:48.906 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:48.906 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:48.906 Found net devices under 0000:af:00.0: cvl_0_0 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:48.906 Found net devices under 0000:af:00.1: cvl_0_1 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.906 13:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.906 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.906 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.906 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.906 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:31:48.906 00:31:48.906 --- 10.0.0.2 ping statistics --- 00:31:48.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.906 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:31:48.907 00:31:48.907 --- 10.0.0.1 ping statistics --- 00:31:48.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.907 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1146076 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1146076 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1146076 ']' 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:48.907 [2024-12-15 13:11:56.134258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:31:48.907 [2024-12-15 13:11:56.134305] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.907 [2024-12-15 13:11:56.210573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:48.907 [2024-12-15 13:11:56.232091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.907 [2024-12-15 13:11:56.232128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.907 [2024-12-15 13:11:56.232136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.907 [2024-12-15 13:11:56.232142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:48.907 [2024-12-15 13:11:56.232148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.907 [2024-12-15 13:11:56.233400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.907 [2024-12-15 13:11:56.233506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.907 [2024-12-15 13:11:56.233507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:48.907 [2024-12-15 13:11:56.532596] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:48.907 Malloc0 00:31:48.907 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:49.165 13:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:49.424 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.424 [2024-12-15 13:11:57.322628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.682 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:49.682 [2024-12-15 13:11:57.527177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:49.682 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:49.940 [2024-12-15 13:11:57.723783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1146345 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1146345 /var/tmp/bdevperf.sock 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1146345 ']' 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:49.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.940 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:50.198 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.198 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:50.198 13:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:50.764 NVMe0n1 00:31:50.764 13:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:51.022 00:31:51.022 13:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1146450 00:31:51.022 13:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:51.022 13:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
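At this point in the trace, bdevperf is listening on `/var/tmp/bdevperf.sock` and the same subsystem has been attached once per listener under a single bdev name with `-x failover`, which is what makes the second connection a failover path rather than a second device. A sketch of that attach loop, with a hypothetical `rpc` stub that echoes the calls so it can run without a live SPDK target (swap the stub for `scripts/rpc.py` to drive a real bdevperf instance):

```shell
#!/usr/bin/env bash
# Multipath attach sketch: ports, NQN, and bdev name are taken from the
# trace above. "rpc" is a stand-in that prints the RPC it would issue;
# point it at the real scripts/rpc.py to run against a live target.
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

rpc() { echo "rpc.py $*"; }

# First call creates controller NVMe0 (and bdev NVMe0n1); the second,
# with the same -b and -n, registers 10.0.0.2:4421 as a failover path.
for port in 4420 4421; do
    rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn" -x failover
done
```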
00:31:51.957 13:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:52.215 [2024-12-15 13:11:59.894700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597ac0 is same with the state(6) to be set 00:31:52.216 [... identical recv-state messages for tqpair=0x1597ac0 repeated ...] 00:31:52.216 [2024-12-15 13:11:59.895203]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597ac0 is same with the state(6) to be set 00:31:52.216 [2024-12-15 13:11:59.895209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597ac0 is same with the state(6) to be set 00:31:52.216 [2024-12-15 13:11:59.895215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597ac0 is same with the state(6) to be set 00:31:52.216 [2024-12-15 13:11:59.895221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597ac0 is same with the state(6) to be set 00:31:52.216 [2024-12-15 13:11:59.895227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597ac0 is same with the state(6) to be set 00:31:52.216 [2024-12-15 13:11:59.895233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597ac0 is same with the state(6) to be set 00:31:52.216 [2024-12-15 13:11:59.895238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597ac0 is same with the state(6) to be set 00:31:52.216 13:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:55.500 13:12:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:55.500 00:31:55.500 13:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:55.758 [2024-12-15 13:12:03.451886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.758 [2024-12-15 13:12:03.451923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.758 [2024-12-15 13:12:03.451931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.758 [2024-12-15 13:12:03.451938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.758 [2024-12-15 13:12:03.451944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.758 [2024-12-15 13:12:03.451950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.758 [2024-12-15 13:12:03.451955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.451966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.451972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.451978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.451984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.451989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.451995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is 
same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be 
set 00:31:55.759 [2024-12-15 13:12:03.452074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 [2024-12-15 13:12:03.452131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15988e0 is same with the state(6) to be set 00:31:55.759 13:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:59.044 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.044 [2024-12-15 
13:12:06.660217] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.044 13:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:59.978 13:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:59.978 [2024-12-15 13:12:07.882840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.882997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with 
the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.978 [2024-12-15 13:12:07.883107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 
00:31:59.979 [2024-12-15 13:12:07.883154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 
13:12:07.883232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883309] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883379] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:31:59.979 [2024-12-15 13:12:07.883434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599690 is same with the state(6) to be set 00:32:00.237 13:12:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1146450 00:32:06.805 { 00:32:06.805 "results": [ 00:32:06.805 { 00:32:06.805 "job": "NVMe0n1", 00:32:06.805 "core_mask": "0x1", 00:32:06.805 
"workload": "verify", 00:32:06.805 "status": "finished", 00:32:06.805 "verify_range": { 00:32:06.805 "start": 0, 00:32:06.805 "length": 16384 00:32:06.805 }, 00:32:06.805 "queue_depth": 128, 00:32:06.805 "io_size": 4096, 00:32:06.805 "runtime": 15.011199, 00:32:06.805 "iops": 11347.261467921384, 00:32:06.805 "mibps": 44.325240109067906, 00:32:06.805 "io_failed": 11845, 00:32:06.805 "io_timeout": 0, 00:32:06.805 "avg_latency_us": 10525.113253470317, 00:32:06.805 "min_latency_us": 423.25333333333333, 00:32:06.805 "max_latency_us": 22719.146666666667 00:32:06.805 } 00:32:06.805 ], 00:32:06.805 "core_count": 1 00:32:06.805 } 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1146345 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1146345 ']' 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1146345 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146345 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146345' 00:32:06.805 killing process with pid 1146345 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1146345 00:32:06.805 13:12:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1146345 00:32:06.805 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- 
# cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:06.805 [2024-12-15 13:11:57.799364] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:06.805 [2024-12-15 13:11:57.799418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146345 ] 00:32:06.805 [2024-12-15 13:11:57.874763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.805 [2024-12-15 13:11:57.897260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.805 Running I/O for 15 seconds... 00:32:06.805 11581.00 IOPS, 45.24 MiB/s [2024-12-15T12:12:14.712Z] [2024-12-15 13:11:59.896729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.805 [2024-12-15 13:11:59.896764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.805 [2024-12-15 13:11:59.896779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.805 [2024-12-15 13:11:59.896787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.805 [2024-12-15 13:11:59.896796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.805 [2024-12-15 13:11:59.896803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.805 [2024-12-15 13:11:59.896811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101432 
00:32:06.805-00:32:06.808 [2024-12-15 13:11:59.896818 - 13:11:59.898611] nvme_qpair.c: [repetitive per-command notice pairs condensed: 243:nvme_io_qpair_print_command *NOTICE* entries for READ sqid:1 nsid:1 lba:101440-101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:101896-102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each followed by 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.808 [2024-12-15 13:11:59.898617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.808 [2024-12-15 13:11:59.898631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.808 [2024-12-15 13:11:59.898647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.808 [2024-12-15 13:11:59.898661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.808 [2024-12-15 13:11:59.898676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.808 [2024-12-15 13:11:59.898701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.808 [2024-12-15 13:11:59.898707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:102424 len:8 PRP1 0x0 PRP2 0x0 00:32:06.808 [2024-12-15 13:11:59.898714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898758] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:06.808 [2024-12-15 13:11:59.898779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.808 [2024-12-15 13:11:59.898786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.808 [2024-12-15 13:11:59.898800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.808 [2024-12-15 13:11:59.898814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.808 [2024-12-15 13:11:59.898831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.808 [2024-12-15 13:11:59.898838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
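The abort dump above follows a fixed per-command format emitted by `nvme_io_qpair_print_command`. As an illustrative aside (not part of the test itself), a small parser for these lines might look like the sketch below; the function name and field choices are assumptions, only the log format is taken from the output above:

```python
import re

# Matches the per-command *NOTICE* lines emitted by nvme_qpair.c during an
# SQ-deletion abort dump, e.g.:
#   ... nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102216 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: "
    r"(?P<op>READ|WRITE) sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) "
    r"nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def parse_aborted_commands(log_text):
    """Return (opcode, cid, lba) for every command printed in an abort dump;
    handy for checking that the aborted LBAs advance by `len` blocks."""
    return [(m["op"], int(m["cid"]), int(m["lba"]))
            for m in CMD_RE.finditer(log_text)]

sample = ("[2024-12-15 13:11:59.898311] nvme_qpair.c: 243:"
          "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 "
          "nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000")
print(parse_aborted_commands(sample))  # -> [('WRITE', 9, 102216)]
```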
00:32:06.808 [2024-12-15 13:11:59.898866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2460 (9): Bad file descriptor
[2024-12-15 13:11:59.901636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-12-15 13:11:59.928252] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
11423.50 IOPS, 44.62 MiB/s [2024-12-15T12:12:14.715Z] 11493.67 IOPS, 44.90 MiB/s [2024-12-15T12:12:14.715Z] 11496.50 IOPS, 44.91 MiB/s [2024-12-15T12:12:14.715Z]
[2024-12-15 13:12:03.453131-.453743] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs: READ sqid:1 nsid:1 len:8 for lba:47368 through lba:47680 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each aborted with SQ DELETION (00/08) qid:1 -- 40 repetitive command/completion pairs condensed
[2024-12-15 13:12:03.453751 onward] nvme_qpair.c: same *NOTICE* pairs: WRITE sqid:1 nsid:1 len:8 for lba:47736 through lba:48080 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each aborted with SQ DELETION (00/08) qid:1 -- repetitive dump condensed; truncated mid-record at lba:48080 and continues
0x0 len:0x1000 00:32:06.811 [2024-12-15 13:12:03.454391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.811 [2024-12-15 13:12:03.454405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.811 [2024-12-15 13:12:03.454419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.811 [2024-12-15 13:12:03.454434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.811 [2024-12-15 13:12:03.454448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48120 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48128 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48136 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48144 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454577] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48152 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48160 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48168 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48176 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 
[2024-12-15 13:12:03.454665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48184 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48192 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48200 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48208 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48216 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48224 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48232 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48240 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48248 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48256 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48264 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48272 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.811 [2024-12-15 13:12:03.454965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48280 len:8 PRP1 0x0 PRP2 0x0 00:32:06.811 [2024-12-15 13:12:03.454971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.811 [2024-12-15 13:12:03.454977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.811 [2024-12-15 13:12:03.454982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.454987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48288 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.454993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.455000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.455005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.455010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48296 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.455017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.455024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.455028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.455034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48304 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.455039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.455046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.455051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.455061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48312 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.455068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.455074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.455079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.455084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48320 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.455090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.455096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.455101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.455106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48328 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.455113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.455119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.455124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.455129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48336 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.455135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.455141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 
[2024-12-15 13:12:03.465808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.465823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48344 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.465837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.465848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.465855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.465862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48352 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.465870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.465879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.465885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.465892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48360 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.465901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.465910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.465917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.465924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:48368 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.465932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.465941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.465950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.465957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48376 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.465965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.465974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.465981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.465988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48384 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.465996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.466011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.466018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47688 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.466026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466034] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.466041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.466048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47696 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.466056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.466071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.466078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47704 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.466087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.466102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.466109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47712 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.466117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.466132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.466139] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47720 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.466147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.812 [2024-12-15 13:12:03.466162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.812 [2024-12-15 13:12:03.466169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47728 len:8 PRP1 0x0 PRP2 0x0 00:32:06.812 [2024-12-15 13:12:03.466178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466228] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:06.812 [2024-12-15 13:12:03.466256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.812 [2024-12-15 13:12:03.466266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.812 [2024-12-15 13:12:03.466285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.812 [2024-12-15 13:12:03.466303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.812 [2024-12-15 13:12:03.466320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.812 [2024-12-15 13:12:03.466329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:06.812 [2024-12-15 13:12:03.466366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2460 (9): Bad file descriptor 00:32:06.812 [2024-12-15 13:12:03.470111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:06.812 [2024-12-15 13:12:03.499580] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
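Every aborted completion in this stretch of the log carries the same status field, "(00/08)", which spdk_nvme_print_completion emits as a status-code-type/status-code pair. A minimal, hypothetical parser for that field (the function and regex names are mine, not SPDK's) might look like this; SCT 0x0 is the generic command status type and SC 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base specification:

```python
import re

# Match the "(SCT/SC)" field embedded in lines like
# "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0".
STATUS_RE = re.compile(r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)")

def decode_status(line: str):
    """Return (sct, sc) parsed from an SPDK completion log line, or None."""
    m = STATUS_RE.search(line)
    if not m:
        return None
    return int(m.group("sct"), 16), int(m.group("sc"), 16)

line = ("ABORTED - SQ DELETION (00/08) qid:1 cid:0 "
        "cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
sct, sc = decode_status(line)
print(sct, sc)  # -> 0 8
```
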
00:32:06.812 11449.60 IOPS, 44.73 MiB/s [2024-12-15T12:12:14.719Z] 11478.33 IOPS, 44.84 MiB/s [2024-12-15T12:12:14.719Z] 11496.14 IOPS, 44.91 MiB/s [2024-12-15T12:12:14.719Z] 11493.88 IOPS, 44.90 MiB/s [2024-12-15T12:12:14.719Z] 11493.33 IOPS, 44.90 MiB/s [2024-12-15T12:12:14.719Z]
[2024-12-15 13:12:07.885612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.812 [2024-12-15 13:12:07.885644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for READ lba:70768 through lba:70792 (SGL TRANSPORT DATA BLOCK) and WRITE lba:70824 through lba:70904 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each aborted with ABORTED - SQ DELETION (00/08) ...]
00:32:06.813 [2024-12-15 13:12:07.885890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:17 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.885897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.885905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.885912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.885920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.885928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.885936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.885942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.885950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.885957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.885964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.885970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:06.813 [2024-12-15 13:12:07.885978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.885985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.885993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.885999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.813 [2024-12-15 13:12:07.886158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.813 [2024-12-15 13:12:07.886173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.813 [2024-12-15 13:12:07.886187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.813 [2024-12-15 13:12:07.886201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.813 [2024-12-15 13:12:07.886210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 
[2024-12-15 13:12:07.886225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 
13:12:07.886472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886551] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.814 [2024-12-15 13:12:07.886669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.814 [2024-12-15 13:12:07.886695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71320 len:8 PRP1 0x0 PRP2 0x0 00:32:06.814 [2024-12-15 13:12:07.886701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.814 [2024-12-15 13:12:07.886719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.814 [2024-12-15 13:12:07.886724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71328 len:8 PRP1 0x0 PRP2 0x0 00:32:06.814 [2024-12-15 13:12:07.886730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886736] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.814 [2024-12-15 13:12:07.886741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.814 [2024-12-15 13:12:07.886746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71336 len:8 PRP1 0x0 PRP2 0x0 00:32:06.814 [2024-12-15 13:12:07.886752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.814 [2024-12-15 13:12:07.886764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.814 [2024-12-15 13:12:07.886769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71344 len:8 PRP1 0x0 PRP2 0x0 00:32:06.814 [2024-12-15 13:12:07.886775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.814 [2024-12-15 13:12:07.886786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.814 [2024-12-15 13:12:07.886791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71352 len:8 PRP1 0x0 PRP2 0x0 00:32:06.814 [2024-12-15 13:12:07.886797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.814 [2024-12-15 13:12:07.886804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.814 [2024-12-15 13:12:07.886808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.814 [2024-12-15 13:12:07.886814] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71360 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.886820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.886831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.886836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.886841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71368 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.886847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.886854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.886860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.886865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71376 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.886871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.886878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.886883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.886888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71384 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.886894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.886900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.886906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.886911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71392 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.886918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.886924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.886929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.886934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71400 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.886940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.886947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.886952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.886957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71408 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.886963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.886969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.886974] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.886979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71416 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.886985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.886992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.886997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71424 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71432 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71440 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 
[2024-12-15 13:12:07.887056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71448 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71456 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71464 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71472 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71480 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71488 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71496 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71504 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71512 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71520 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71528 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71536 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71544 len:8 PRP1 0x0 PRP2 0x0 00:32:06.815 [2024-12-15 13:12:07.887352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.815 [2024-12-15 13:12:07.887359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.815 [2024-12-15 13:12:07.887364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:06.815 [2024-12-15 13:12:07.887370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71552 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.887376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.887387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.887392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.887399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71560 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.887406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.887412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.887417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.887422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71568 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.887428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.887434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.887439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.887445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71576 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.887451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.887457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.887462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.887467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71584 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.887473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.887479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.887484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.887489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71592 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.887496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71600 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 
[2024-12-15 13:12:07.901549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71608 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71616 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71624 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:71632 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71640 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71648 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71656 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901802] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71664 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71672 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71680 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 
13:12:07.901940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71688 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.901967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.901975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71696 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.901985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.901995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.902004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.902012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71704 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.902021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.902028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.902033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.902038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71712 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.902044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.902050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.902055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.902061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71720 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.902067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.902073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.902078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.902085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71728 len:8 PRP1 0x0 PRP2 0x0 00:32:06.816 [2024-12-15 13:12:07.902091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.816 [2024-12-15 13:12:07.902098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.816 [2024-12-15 13:12:07.902103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.816 [2024-12-15 13:12:07.902109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71736 len:8 PRP1 0x0 PRP2 0x0 00:32:06.817 [2024-12-15 13:12:07.902115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.817 [2024-12-15 13:12:07.902122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.817 [2024-12-15 13:12:07.902127] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.817 [2024-12-15 13:12:07.902134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71744 len:8 PRP1 0x0 PRP2 0x0 00:32:06.817 [2024-12-15 13:12:07.902140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.817 [2024-12-15 13:12:07.902146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.817 [2024-12-15 13:12:07.902152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.817 [2024-12-15 13:12:07.902157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71752 len:8 PRP1 0x0 PRP2 0x0 00:32:06.817 [2024-12-15 13:12:07.902164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.817 [2024-12-15 13:12:07.902170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.817 [2024-12-15 13:12:07.902175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.817 [2024-12-15 13:12:07.902180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71760 len:8 PRP1 0x0 PRP2 0x0 00:32:06.817 [2024-12-15 13:12:07.902186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.817 [2024-12-15 13:12:07.902193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:06.817 [2024-12-15 13:12:07.902198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:06.817 [2024-12-15 13:12:07.902204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71768 len:8 PRP1 0x0 PRP2 0x0 00:32:06.817 
[2024-12-15 13:12:07.902210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:06.817 [2024-12-15 13:12:07.902217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:06.817 [2024-12-15 13:12:07.902222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:06.817 [2024-12-15 13:12:07.902227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71776 len:8 PRP1 0x0 PRP2 0x0
00:32:06.817 [2024-12-15 13:12:07.902233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:06.817 [2024-12-15 13:12:07.902278] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:32:06.817 [2024-12-15 13:12:07.902303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:06.817 [2024-12-15 13:12:07.902310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:06.817 [2024-12-15 13:12:07.902318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:06.817 [2024-12-15 13:12:07.902324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:06.817 [2024-12-15 13:12:07.902348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:06.817 [2024-12-15 13:12:07.902356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:06.817 [2024-12-15 13:12:07.902366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:06.817 [2024-12-15 13:12:07.902374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:06.817 [2024-12-15 13:12:07.902384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:32:06.817 [2024-12-15 13:12:07.902423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2460 (9): Bad file descriptor
00:32:06.817 [2024-12-15 13:12:07.906381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:32:06.817 [2024-12-15 13:12:08.090479] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:32:06.817 11259.50 IOPS, 43.98 MiB/s [2024-12-15T12:12:14.724Z] 11273.00 IOPS, 44.04 MiB/s [2024-12-15T12:12:14.724Z] 11309.75 IOPS, 44.18 MiB/s [2024-12-15T12:12:14.724Z] 11333.62 IOPS, 44.27 MiB/s [2024-12-15T12:12:14.724Z] 11337.00 IOPS, 44.29 MiB/s [2024-12-15T12:12:14.724Z] 11347.20 IOPS, 44.33 MiB/s
00:32:06.817 Latency(us)
00:32:06.817 [2024-12-15T12:12:14.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:06.817 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:06.817 Verification LBA range: start 0x0 length 0x4000
00:32:06.817 NVMe0n1 : 15.01 11347.26 44.33 789.08 0.00 10525.11 423.25 22719.15
00:32:06.817 [2024-12-15T12:12:14.724Z] ===================================================================================================================
00:32:06.817 [2024-12-15T12:12:14.724Z] Total : 11347.26 44.33 789.08 0.00 10525.11 423.25 22719.15
00:32:06.817 Received shutdown signal, test time was about 15.000000 seconds
00:32:06.817
00:32:06.817 Latency(us)
[2024-12-15T12:12:14.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:06.817 [2024-12-15T12:12:14.724Z] ===================================================================================================================
00:32:06.817 [2024-12-15T12:12:14.724Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1149326
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1149326 /var/tmp/bdevperf.sock
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1149326 ']'
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:06.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:06.817 [2024-12-15 13:12:14.508777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:06.817 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:07.076 [2024-12-15 13:12:14.705356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:32:07.076 13:12:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:07.337 NVMe0n1
00:32:07.337 13:12:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:07.596
00:32:07.853 13:12:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:08.110
00:32:08.110 13:12:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:08.110 13:12:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:32:08.368 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:08.368 13:12:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:32:11.650 13:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:11.650 13:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:32:11.650 13:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:11.651 13:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1150223
00:32:11.651 13:12:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1150223
00:32:13.024 {
00:32:13.024 "results": [
00:32:13.024 {
00:32:13.024 "job": "NVMe0n1",
00:32:13.024 "core_mask": "0x1",
00:32:13.024 "workload": "verify",
00:32:13.024 "status": "finished",
00:32:13.024 "verify_range": {
00:32:13.024 "start": 0,
00:32:13.024 "length": 16384
00:32:13.024 },
00:32:13.024 "queue_depth": 128,
00:32:13.024 "io_size": 4096,
00:32:13.024 "runtime": 1.004636,
00:32:13.024 "iops": 11442.950481567454,
00:32:13.024 "mibps": 44.699025318622866,
00:32:13.024 "io_failed": 0,
00:32:13.024 "io_timeout": 0,
00:32:13.024 "avg_latency_us": 11143.670413891374,
00:32:13.024 "min_latency_us": 1435.5504761904763,
00:32:13.024 "max_latency_us": 11609.234285714285
00:32:13.024 }
00:32:13.024 ],
00:32:13.024 "core_count": 1
00:32:13.024 }
00:32:13.024 13:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:13.024 [2024-12-15 13:12:14.140086] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:13.024 [2024-12-15 13:12:14.140140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149326 ]
00:32:13.024 [2024-12-15 13:12:14.213910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:13.024 [2024-12-15 13:12:14.233631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:13.024 [2024-12-15 13:12:16.189449] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:13.024 [2024-12-15 13:12:16.189492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:13.024 [2024-12-15 13:12:16.189503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.024 [2024-12-15 13:12:16.189511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:13.024 [2024-12-15 13:12:16.189519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.024 [2024-12-15 13:12:16.189526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:13.024 [2024-12-15 13:12:16.189533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.024 [2024-12-15 13:12:16.189539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:13.024 [2024-12-15 13:12:16.189546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:13.024 [2024-12-15 13:12:16.189553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:32:13.024 [2024-12-15 13:12:16.189578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:32:13.024 [2024-12-15 13:12:16.189592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1218460 (9): Bad file descriptor
00:32:13.024 [2024-12-15 13:12:16.200149] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:32:13.024 Running I/O for 1 seconds...
00:32:13.024 11368.00 IOPS, 44.41 MiB/s 00:32:13.024 Latency(us) 00:32:13.024 [2024-12-15T12:12:20.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.024 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:13.024 Verification LBA range: start 0x0 length 0x4000 00:32:13.024 NVMe0n1 : 1.00 11442.95 44.70 0.00 0.00 11143.67 1435.55 11609.23 00:32:13.024 [2024-12-15T12:12:20.931Z] =================================================================================================================== 00:32:13.024 [2024-12-15T12:12:20.931Z] Total : 11442.95 44.70 0.00 0.00 11143.67 1435.55 11609.23 00:32:13.024 13:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:13.024 13:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:13.024 13:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:13.024 13:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:13.024 13:12:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:13.282 13:12:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:13.541 13:12:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:16.822 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1149326 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1149326 ']' 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1149326 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1149326 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1149326' 00:32:16.823 killing process with pid 1149326 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1149326 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1149326 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:16.823 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:17.081 rmmod nvme_tcp 00:32:17.081 rmmod nvme_fabrics 00:32:17.081 rmmod nvme_keyring 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1146076 ']' 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1146076 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1146076 ']' 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1146076 00:32:17.081 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:17.344 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:17.344 13:12:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146076 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146076' 00:32:17.344 killing process with pid 1146076 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1146076 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1146076 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.344 13:12:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:19.882 00:32:19.882 real 0m37.286s 00:32:19.882 user 1m58.233s 00:32:19.882 sys 
0m7.784s 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:19.882 ************************************ 00:32:19.882 END TEST nvmf_failover 00:32:19.882 ************************************ 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.882 ************************************ 00:32:19.882 START TEST nvmf_host_discovery 00:32:19.882 ************************************ 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:19.882 * Looking for test storage... 
00:32:19.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:19.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.882 --rc genhtml_branch_coverage=1 00:32:19.882 --rc genhtml_function_coverage=1 00:32:19.882 --rc 
genhtml_legend=1 00:32:19.882 --rc geninfo_all_blocks=1 00:32:19.882 --rc geninfo_unexecuted_blocks=1 00:32:19.882 00:32:19.882 ' 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:19.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.882 --rc genhtml_branch_coverage=1 00:32:19.882 --rc genhtml_function_coverage=1 00:32:19.882 --rc genhtml_legend=1 00:32:19.882 --rc geninfo_all_blocks=1 00:32:19.882 --rc geninfo_unexecuted_blocks=1 00:32:19.882 00:32:19.882 ' 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:19.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.882 --rc genhtml_branch_coverage=1 00:32:19.882 --rc genhtml_function_coverage=1 00:32:19.882 --rc genhtml_legend=1 00:32:19.882 --rc geninfo_all_blocks=1 00:32:19.882 --rc geninfo_unexecuted_blocks=1 00:32:19.882 00:32:19.882 ' 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:19.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.882 --rc genhtml_branch_coverage=1 00:32:19.882 --rc genhtml_function_coverage=1 00:32:19.882 --rc genhtml_legend=1 00:32:19.882 --rc geninfo_all_blocks=1 00:32:19.882 --rc geninfo_unexecuted_blocks=1 00:32:19.882 00:32:19.882 ' 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.882 13:12:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.882 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.883 13:12:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.883 13:12:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:19.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:19.883 13:12:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:26.452 
13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.452 13:12:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:26.452 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:26.453 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:26.453 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:26.453 Found net devices under 0000:af:00.0: cvl_0_0 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:26.453 Found net devices under 0000:af:00.1: cvl_0_1 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:26.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:32:26.453 00:32:26.453 --- 10.0.0.2 ping statistics --- 00:32:26.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.453 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:26.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:32:26.453 00:32:26.453 --- 10.0.0.1 ping statistics --- 00:32:26.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.453 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.453 
13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1154586 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1154586 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1154586 ']' 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
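The "Waiting for process to start up and listen on UNIX domain socket ..." message above comes from the autotest `waitforlisten` helper: poll the freshly launched app's RPC socket until it answers, bailing out if the process dies first. A simplified sketch of that pattern — the `scripts/rpc.py` path and the 100 x 0.1 s retry budget are illustrative assumptions, not values taken from this log:

```shell
# Simplified sketch of the waitforlisten pattern: wait until the app with the
# given pid answers RPCs on its UNIX socket, or give up. The scripts/rpc.py
# path and the retry budget are assumptions for illustration.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        # bail out early if the target process already died
        kill -0 "$pid" 2> /dev/null || return 1
        # rpc_get_methods succeeds once the RPC server accepts connections
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

The test uses this twice: once for the target (`-m 0x2`, RPC on /var/tmp/spdk.sock, run inside the namespace) and once for the host app (`-m 0x1`, RPC on /tmp/host.sock).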
00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.453 [2024-12-15 13:12:33.470178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:26.453 [2024-12-15 13:12:33.470221] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.453 [2024-12-15 13:12:33.549746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.453 [2024-12-15 13:12:33.570685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:26.453 [2024-12-15 13:12:33.570721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.453 [2024-12-15 13:12:33.570728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.453 [2024-12-15 13:12:33.570734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.453 [2024-12-15 13:12:33.570739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
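A few entries back, `nvmf_tcp_init` moved one of the two e810 ports (cvl_0_0) into the cvl_0_0_ns_spdk namespace and addressed both sides before the ping checks. That plumbing can be sketched standalone; the device names, addresses, and command order are the ones in this trace, while the `run` wrapper (echo-only unless DRY_RUN=0) is added here because executing the real commands needs root and the physical ports:

```shell
# Namespace plumbing from nvmf_tcp_init, as seen in this trace. DRY_RUN
# defaults to 1 (echo the commands only); set DRY_RUN=0 to execute for real.
TARGET_IF=cvl_0_0            # target-side port, moved into the namespace
INITIATOR_IF=cvl_0_1         # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

run() { if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "$*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open TCP/4420 (NVMe/TCP) on the initiator side, then verify both directions
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With the target port isolated in its own namespace, target and initiator traverse a real TCP path over the two physical ports instead of the kernel loopback, which is why both ping directions are checked before the test proceeds.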
00:32:26.453 [2024-12-15 13:12:33.571227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:26.453 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 [2024-12-15 13:12:33.701599] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 [2024-12-15 13:12:33.713770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:26.454 13:12:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 null0 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 null1 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1154607 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1154607 /tmp/host.sock 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1154607 ']' 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:26.454 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 [2024-12-15 13:12:33.790968] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:26.454 [2024-12-15 13:12:33.791011] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154607 ] 00:32:26.454 [2024-12-15 13:12:33.865625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.454 [2024-12-15 13:12:33.888536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:26.454 
13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:26.454 13:12:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:26.454 13:12:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:26.454 
13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.454 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.455 [2024-12-15 13:12:34.275189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.455 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.713 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:26.713 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:26.713 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
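The `waitforcondition 'get_notification_count && ((notification_count == expected_count))'` calls in this trace all drive the same small retry loop from autotest_common.sh: re-evaluate an arbitrary shell condition until it holds or the attempt budget runs out. A minimal sketch — `max=10`, the `eval`, and the `sleep 1` all appear in the trace above; only the packaging here is ours:

```shell
# Minimal sketch of waitforcondition: retry an arbitrary shell condition up to
# max times (default 10, as in the trace), sleeping 1s between attempts.
waitforcondition() {
    local cond=$1
    local max=${2:-10}
    while ((max--)); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```

The test leans on this for every state transition: string compares against RPC output (`[[ "$(get_bdev_list)" == "nvme0n1" ]]`) and the notification-count checks alike, so a slow discovery attach simply consumes more of the 10 attempts instead of failing outright.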
00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:26.714 13:12:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:27.280 [2024-12-15 13:12:35.048012] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:27.280 [2024-12-15 13:12:35.048035] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:27.280 [2024-12-15 13:12:35.048050] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:27.280 [2024-12-15 13:12:35.134288] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:27.537 [2024-12-15 13:12:35.310207] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:27.537 [2024-12-15 13:12:35.310955] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x21b1c60:1 started. 00:32:27.537 [2024-12-15 13:12:35.312126] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:27.537 [2024-12-15 13:12:35.312145] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:27.537 [2024-12-15 13:12:35.316968] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21b1c60 was disconnected and freed. delete nvme_qpair. 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.795 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.795 13:12:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.796 [2024-12-15 13:12:35.682618] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21b1fe0:1 started. 
00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.796 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.056 [2024-12-15 13:12:35.728284] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21b1fe0 was disconnected and freed. delete nvme_qpair. 
00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.056 [2024-12-15 13:12:35.783547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:28.056 [2024-12-15 13:12:35.784065] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:28.056 [2024-12-15 13:12:35.784084] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:28.056 13:12:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.056 [2024-12-15 13:12:35.870323] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:28.056 [2024-12-15 13:12:35.932802] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:28.056 [2024-12-15 13:12:35.932841] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:28.056 [2024-12-15 13:12:35.932849] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:28.056 [2024-12-15 13:12:35.932854] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:28.056 13:12:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.435 13:12:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.435 [2024-12-15 13:12:37.039934] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:29.435 [2024-12-15 13:12:37.039956] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:29.435 [2024-12-15 13:12:37.044661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:29.435 [2024-12-15 13:12:37.044678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:29.435 [2024-12-15 13:12:37.044690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:29.435 [2024-12-15 13:12:37.044697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:29.435 [2024-12-15 13:12:37.044705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:29.435 [2024-12-15 13:12:37.044712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:29.435 [2024-12-15 13:12:37.044719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:29.435 [2024-12-15 13:12:37.044725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:29.435 [2024-12-15 13:12:37.044732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183d70 is same with the state(6) to be set 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:29.435 13:12:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:29.435 [2024-12-15 13:12:37.054673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183d70 (9): Bad file descriptor 00:32:29.435 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.435 [2024-12-15 13:12:37.064709] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:29.435 [2024-12-15 13:12:37.064720] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:29.435 [2024-12-15 13:12:37.064727] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:29.435 [2024-12-15 13:12:37.064732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:29.435 [2024-12-15 13:12:37.064747] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:29.435 [2024-12-15 13:12:37.064914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.435 [2024-12-15 13:12:37.064929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183d70 with addr=10.0.0.2, port=4420 00:32:29.436 [2024-12-15 13:12:37.064937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183d70 is same with the state(6) to be set 00:32:29.436 [2024-12-15 13:12:37.064948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183d70 (9): Bad file descriptor 00:32:29.436 [2024-12-15 13:12:37.064958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:29.436 [2024-12-15 13:12:37.064965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:29.436 [2024-12-15 13:12:37.064973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:29.436 [2024-12-15 13:12:37.064980] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:29.436 [2024-12-15 13:12:37.064985] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:29.436 [2024-12-15 13:12:37.064990] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:29.436 [2024-12-15 13:12:37.074777] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:29.436 [2024-12-15 13:12:37.074789] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:29.436 [2024-12-15 13:12:37.074793] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:29.436 [2024-12-15 13:12:37.074797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:29.436 [2024-12-15 13:12:37.074809] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:29.436 [2024-12-15 13:12:37.075065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.436 [2024-12-15 13:12:37.075078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183d70 with addr=10.0.0.2, port=4420 00:32:29.436 [2024-12-15 13:12:37.075085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183d70 is same with the state(6) to be set 00:32:29.436 [2024-12-15 13:12:37.075095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183d70 (9): Bad file descriptor 00:32:29.436 [2024-12-15 13:12:37.075112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:29.436 [2024-12-15 13:12:37.075119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:29.436 [2024-12-15 13:12:37.075130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:29.436 [2024-12-15 13:12:37.075136] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:29.436 [2024-12-15 13:12:37.075140] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:29.436 [2024-12-15 13:12:37.075144] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:29.436 [2024-12-15 13:12:37.084841] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:29.436 [2024-12-15 13:12:37.084854] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:29.436 [2024-12-15 13:12:37.084858] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:29.436 [2024-12-15 13:12:37.084862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:29.436 [2024-12-15 13:12:37.084876] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:29.436 [2024-12-15 13:12:37.084982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.436 [2024-12-15 13:12:37.084994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183d70 with addr=10.0.0.2, port=4420 00:32:29.436 [2024-12-15 13:12:37.085001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183d70 is same with the state(6) to be set 00:32:29.436 [2024-12-15 13:12:37.085011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183d70 (9): Bad file descriptor 00:32:29.436 [2024-12-15 13:12:37.085020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:29.436 [2024-12-15 13:12:37.085026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:29.436 [2024-12-15 13:12:37.085033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:29.436 [2024-12-15 13:12:37.085038] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:29.436 [2024-12-15 13:12:37.085042] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:29.436 [2024-12-15 13:12:37.085046] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:29.436 [2024-12-15 13:12:37.094906] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:29.436 [2024-12-15 13:12:37.094917] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:29.436 [2024-12-15 13:12:37.094922] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:32:29.436 [2024-12-15 13:12:37.094925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:29.436 [2024-12-15 13:12:37.094941] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:29.436 [2024-12-15 13:12:37.095040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.436 [2024-12-15 13:12:37.095051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183d70 with addr=10.0.0.2, port=4420 00:32:29.436 [2024-12-15 13:12:37.095057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183d70 is same with the state(6) to be set 00:32:29.436 [2024-12-15 13:12:37.095067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183d70 (9): Bad file descriptor 00:32:29.436 [2024-12-15 13:12:37.095076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:29.436 [2024-12-15 13:12:37.095081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:29.436 [2024-12-15 13:12:37.095088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:29.436 [2024-12-15 13:12:37.095093] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:29.436 [2024-12-15 13:12:37.095098] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:29.436 [2024-12-15 13:12:37.095102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.436 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.436 [2024-12-15 13:12:37.104972] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:29.436 [2024-12-15 13:12:37.104985] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:29.436 [2024-12-15 13:12:37.104989] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:29.436 [2024-12-15 13:12:37.104993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:29.436 [2024-12-15 13:12:37.105007] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:29.436 [2024-12-15 13:12:37.105188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.436 [2024-12-15 13:12:37.105209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183d70 with addr=10.0.0.2, port=4420 00:32:29.436 [2024-12-15 13:12:37.105216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183d70 is same with the state(6) to be set 00:32:29.436 [2024-12-15 13:12:37.105227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183d70 (9): Bad file descriptor 00:32:29.436 [2024-12-15 13:12:37.105251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:29.436 [2024-12-15 13:12:37.105258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:29.436 [2024-12-15 13:12:37.105265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:29.436 [2024-12-15 13:12:37.105271] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:29.436 [2024-12-15 13:12:37.105279] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:29.436 [2024-12-15 13:12:37.105284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:29.436 [2024-12-15 13:12:37.115038] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:29.436 [2024-12-15 13:12:37.115048] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:29.436 [2024-12-15 13:12:37.115052] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:29.436 [2024-12-15 13:12:37.115056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:29.436 [2024-12-15 13:12:37.115068] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:29.436 [2024-12-15 13:12:37.115227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.436 [2024-12-15 13:12:37.115238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183d70 with addr=10.0.0.2, port=4420 00:32:29.437 [2024-12-15 13:12:37.115245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183d70 is same with the state(6) to be set 00:32:29.437 [2024-12-15 13:12:37.115254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183d70 (9): Bad file descriptor 00:32:29.437 [2024-12-15 13:12:37.115263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:29.437 [2024-12-15 13:12:37.115270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:29.437 [2024-12-15 13:12:37.115276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:29.437 [2024-12-15 13:12:37.115281] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:29.437 [2024-12-15 13:12:37.115286] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:29.437 [2024-12-15 13:12:37.115289] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:29.437 [2024-12-15 13:12:37.125099] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:29.437 [2024-12-15 13:12:37.125109] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:29.437 [2024-12-15 13:12:37.125113] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:29.437 [2024-12-15 13:12:37.125117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:29.437 [2024-12-15 13:12:37.125129] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:29.437 [2024-12-15 13:12:37.125297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:29.437 [2024-12-15 13:12:37.125307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2183d70 with addr=10.0.0.2, port=4420 00:32:29.437 [2024-12-15 13:12:37.125314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183d70 is same with the state(6) to be set 00:32:29.437 [2024-12-15 13:12:37.125324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2183d70 (9): Bad file descriptor 00:32:29.437 [2024-12-15 13:12:37.125338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:29.437 [2024-12-15 13:12:37.125345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:29.437 [2024-12-15 13:12:37.125351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:29.437 [2024-12-15 13:12:37.125356] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:29.437 [2024-12-15 13:12:37.125363] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:29.437 [2024-12-15 13:12:37.125367] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:29.437 [2024-12-15 13:12:37.126856] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:29.437 [2024-12-15 13:12:37.126874] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:29.437 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.696 13:12:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.631 [2024-12-15 13:12:38.456969] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:30.631 [2024-12-15 13:12:38.456986] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
00:32:30.631 [2024-12-15 13:12:38.456998] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:30.891 [2024-12-15 13:12:38.543247] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:30.891 [2024-12-15 13:12:38.641874] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:30.891 [2024-12-15 13:12:38.642386] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x21bdd10:1 started. 00:32:30.891 [2024-12-15 13:12:38.643960] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:30.891 [2024-12-15 13:12:38.643985] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.891 [2024-12-15 13:12:38.645526] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x21bdd10 was disconnected and freed. delete nvme_qpair. 
00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.891 request: 00:32:30.891 { 00:32:30.891 "name": "nvme", 00:32:30.891 "trtype": "tcp", 00:32:30.891 "traddr": "10.0.0.2", 00:32:30.891 "adrfam": "ipv4", 00:32:30.891 "trsvcid": "8009", 00:32:30.891 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:30.891 "wait_for_attach": true, 00:32:30.891 "method": "bdev_nvme_start_discovery", 00:32:30.891 "req_id": 1 00:32:30.891 } 00:32:30.891 Got JSON-RPC error response 00:32:30.891 response: 00:32:30.891 { 00:32:30.891 "code": -17, 00:32:30.891 
"message": "File exists" 00:32:30.891 } 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.891 request: 00:32:30.891 { 00:32:30.891 "name": "nvme_second", 00:32:30.891 "trtype": "tcp", 00:32:30.891 "traddr": "10.0.0.2", 00:32:30.891 "adrfam": "ipv4", 00:32:30.891 "trsvcid": "8009", 00:32:30.891 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:30.891 "wait_for_attach": true, 00:32:30.891 "method": "bdev_nvme_start_discovery", 00:32:30.891 "req_id": 1 00:32:30.891 } 00:32:30.891 Got JSON-RPC error response 00:32:30.891 response: 00:32:30.891 { 00:32:30.891 "code": -17, 00:32:30.891 "message": "File exists" 00:32:30.891 } 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # xargs 00:32:30.891 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:31.150 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:31.151 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:31.151 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:31.151 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:31.151 
13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:31.151 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:31.151 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:31.151 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:31.151 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.151 13:12:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.085 [2024-12-15 13:12:39.883729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:32.085 [2024-12-15 13:12:39.883757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b8620 with addr=10.0.0.2, port=8010 00:32:32.085 [2024-12-15 13:12:39.883769] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:32.085 [2024-12-15 13:12:39.883776] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:32.085 [2024-12-15 13:12:39.883782] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:33.021 [2024-12-15 13:12:40.886313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.021 [2024-12-15 13:12:40.886341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b8620 with addr=10.0.0.2, port=8010 00:32:33.021 [2024-12-15 13:12:40.886357] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:33.021 [2024-12-15 13:12:40.886364] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:33.021 
[2024-12-15 13:12:40.886370] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:34.399 [2024-12-15 13:12:41.888453] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:34.399 request: 00:32:34.399 { 00:32:34.399 "name": "nvme_second", 00:32:34.399 "trtype": "tcp", 00:32:34.399 "traddr": "10.0.0.2", 00:32:34.399 "adrfam": "ipv4", 00:32:34.399 "trsvcid": "8010", 00:32:34.399 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:34.399 "wait_for_attach": false, 00:32:34.399 "attach_timeout_ms": 3000, 00:32:34.399 "method": "bdev_nvme_start_discovery", 00:32:34.399 "req_id": 1 00:32:34.399 } 00:32:34.399 Got JSON-RPC error response 00:32:34.399 response: 00:32:34.399 { 00:32:34.399 "code": -110, 00:32:34.399 "message": "Connection timed out" 00:32:34.399 } 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
sort 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1154607 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:34.399 13:12:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:34.399 rmmod nvme_tcp 00:32:34.399 rmmod nvme_fabrics 00:32:34.400 rmmod nvme_keyring 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1154586 ']' 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1154586 00:32:34.400 
13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1154586 ']' 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1154586 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1154586 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1154586' 00:32:34.400 killing process with pid 1154586 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1154586 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1154586 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:34.400 13:12:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.400 13:12:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.937 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:36.937 00:32:36.937 real 0m16.928s 00:32:36.937 user 0m20.208s 00:32:36.937 sys 0m5.648s 00:32:36.937 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:36.937 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.937 ************************************ 00:32:36.937 END TEST nvmf_host_discovery 00:32:36.937 ************************************ 00:32:36.937 13:12:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:36.937 13:12:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:36.937 13:12:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:36.937 13:12:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.937 ************************************ 00:32:36.938 START TEST nvmf_host_multipath_status 00:32:36.938 ************************************ 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:32:36.938 * Looking for test storage... 00:32:36.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 
00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.938 
13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:36.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.938 --rc genhtml_branch_coverage=1 00:32:36.938 --rc genhtml_function_coverage=1 00:32:36.938 --rc genhtml_legend=1 00:32:36.938 --rc geninfo_all_blocks=1 00:32:36.938 --rc geninfo_unexecuted_blocks=1 00:32:36.938 00:32:36.938 ' 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:36.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.938 --rc genhtml_branch_coverage=1 00:32:36.938 --rc genhtml_function_coverage=1 00:32:36.938 --rc genhtml_legend=1 00:32:36.938 --rc geninfo_all_blocks=1 00:32:36.938 --rc geninfo_unexecuted_blocks=1 00:32:36.938 00:32:36.938 ' 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:36.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.938 --rc genhtml_branch_coverage=1 00:32:36.938 --rc genhtml_function_coverage=1 00:32:36.938 --rc genhtml_legend=1 00:32:36.938 --rc geninfo_all_blocks=1 00:32:36.938 --rc geninfo_unexecuted_blocks=1 00:32:36.938 00:32:36.938 ' 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:36.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.938 --rc genhtml_branch_coverage=1 00:32:36.938 --rc genhtml_function_coverage=1 00:32:36.938 --rc genhtml_legend=1 00:32:36.938 --rc geninfo_all_blocks=1 00:32:36.938 --rc geninfo_unexecuted_blocks=1 00:32:36.938 00:32:36.938 ' 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 
00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:36.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:36.938 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:36.939 13:12:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:36.939 13:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:42.399 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:42.399 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:42.399 Found net devices under 0000:af:00.0: cvl_0_0 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.399 13:12:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:42.399 Found net devices under 0000:af:00.1: cvl_0_1 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.399 13:12:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:42.399 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:42.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:42.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:32:42.659 00:32:42.659 --- 10.0.0.2 ping statistics --- 00:32:42.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.659 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:42.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:42.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:32:42.659 00:32:42.659 --- 10.0.0.1 ping statistics --- 00:32:42.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.659 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1159587 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
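The trace above (nvmf_tcp_init in nvmf/common.sh) builds a TCP loopback between the host and a network namespace: one port of the NIC (cvl_0_0) is moved into a namespace as the target side, the other (cvl_0_1) stays in the default namespace as the initiator, and a ping in each direction verifies connectivity. A minimal dry-run sketch of that sequence, using the interface names, IPs, and iptables rule from this log (the real commands require root, so this sketch only prints them):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace-based TCP loopback setup traced above.
# Interface names and addresses are taken from this log; 'run' echoes each
# command instead of executing it (swap in direct execution when root).
set -euo pipefail

TARGET_IF=cvl_0_0      # moved into the namespace; NVMe-oF target side
INITIATOR_IF=cvl_0_1   # stays in the default namespace; initiator side
NETNS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NETNS" ping -c 1 10.0.0.1   # target -> initiator
```

Because the target interface lives in the namespace, the nvmf_tgt app is later launched with `ip netns exec cvl_0_0_ns_spdk ...`, as seen in the trace.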
-- nvmf/common.sh@510 -- # waitforlisten 1159587 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1159587 ']' 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:42.659 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.659 [2024-12-15 13:12:50.525533] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:42.659 [2024-12-15 13:12:50.525577] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.918 [2024-12-15 13:12:50.604371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:42.918 [2024-12-15 13:12:50.626058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:42.918 [2024-12-15 13:12:50.626096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:42.918 [2024-12-15 13:12:50.626103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:42.918 [2024-12-15 13:12:50.626109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:42.918 [2024-12-15 13:12:50.626114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:42.918 [2024-12-15 13:12:50.627235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.918 [2024-12-15 13:12:50.627236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.918 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:42.918 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:42.918 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:42.918 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:42.918 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.918 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.918 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1159587 00:32:42.918 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:43.177 [2024-12-15 13:12:50.918552] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.177 13:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:32:43.436 Malloc0 00:32:43.436 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:43.695 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.695 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.954 [2024-12-15 13:12:51.741532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.954 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:44.213 [2024-12-15 13:12:51.937968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:44.213 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1159836 00:32:44.213 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:44.213 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:44.213 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1159836 /var/tmp/bdevperf.sock 00:32:44.213 13:12:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1159836 ']' 00:32:44.213 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:44.213 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.213 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:44.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:44.213 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.213 13:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:44.472 13:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.472 13:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:44.472 13:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:44.731 13:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:44.990 Nvme0n1 00:32:44.990 13:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:45.556 Nvme0n1 00:32:45.556 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:45.556 13:12:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:47.459 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:47.459 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:47.718 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:47.718 13:12:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:49.096 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:49.096 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:49.096 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.096 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:49.096 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.096 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:49.096 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.096 13:12:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:49.355 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:49.355 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:49.355 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.355 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:49.614 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.614 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:49.614 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.614 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:49.614 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.614 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:49.614 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.614 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:49.873 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.873 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:49.873 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.873 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:50.132 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.132 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:50.133 13:12:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:50.391 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
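The repeated `port_status` checks above all follow one pattern: call `bdev_nvme_get_io_paths` over the bdevperf RPC socket, then use `jq` to select the path whose `trsvcid` matches the port and extract one boolean field (`current`, `connected`, or `accessible`). A self-contained sketch of that pattern; the JSON here is a hand-written sample shaped like the RPC output for illustration, not captured from this run:

```shell
#!/usr/bin/env bash
# Sketch of the port_status helper seen in the trace. 'sample' stands in for
# the output of: rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
# (hand-written sample, assumed shape).
set -euo pipefail

sample='{
  "poll_groups": [
    { "io_paths": [
        { "transport": { "trsvcid": "4420" },
          "current": true,  "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" },
          "current": false, "connected": true, "accessible": true }
    ] }
  ]
}'

port_status() {   # port_status <trsvcid> <field> <expected>
  local got
  got=$(echo "$sample" | jq -r \
        ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
  [ "$got" = "$3" ]
}

port_status 4420 current true       # active (optimized) path
port_status 4421 current false      # passive path
port_status 4421 connected true     # but still connected and usable
```

With both listeners set to `optimized`, exactly one path reports `current == true` while both stay `connected` and `accessible`, which is what `check_status true false true true true true` asserts in the trace.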
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:50.391 13:12:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:51.769 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:51.769 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:51.769 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.769 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:51.769 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:51.769 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:51.769 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.769 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:52.027 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.027 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:52.027 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.027 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:52.028 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.028 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:52.028 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.028 13:12:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:52.286 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.286 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:52.286 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.286 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:52.544 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.545 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:52.545 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.545 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:52.803 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.803 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:52.803 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:53.062 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:53.062 13:13:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:54.440 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:54.440 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:54.440 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:54.440 13:13:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.440 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.440 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:54.440 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.440 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:54.699 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:54.699 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:54.699 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:54.699 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.699 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.699 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:54.699 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.699 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:54.958 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.959 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:54.959 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.959 13:13:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:55.218 13:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.218 13:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:55.218 13:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.218 13:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:55.476 13:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.476 13:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:55.476 13:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:55.736 13:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:55.736 13:13:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:57.113 13:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:57.113 13:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:57.113 13:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.113 13:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:57.113 13:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.113 13:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:57.113 13:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.113 13:13:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:57.371 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:57.371 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:57.371 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.371 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:57.371 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.371 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:57.371 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:57.371 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.629 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.629 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:57.629 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:57.629 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.887 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.887 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:57.887 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:57.887 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.146 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:58.146 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:58.146 13:13:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:58.405 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:58.664 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:59.600 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:59.600 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:59.600 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.600 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:59.859 13:13:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.859 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:59.859 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.859 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.859 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.859 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.859 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.859 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:00.118 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.118 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:00.118 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.118 13:13:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:00.376 
13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.376 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:00.376 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.376 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:00.634 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:00.634 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:00.634 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.634 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:00.634 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:00.634 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:00.634 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:00.895 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:01.155 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:02.091 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:02.091 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:02.091 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.091 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:02.350 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.350 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:02.350 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.350 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:02.609 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.609 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:02.609 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.609 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.868 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.868 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.868 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.868 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.868 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.868 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:03.127 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:03.127 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.127 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:03.127 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:03.127 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.127 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:03.386 13:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.386 13:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:03.645 13:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:03.645 13:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:03.903 13:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:03.903 13:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:05.281 13:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:05.281 13:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:05.281 13:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:05.281 13:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:05.281 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.281 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:05.281 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.281 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:05.540 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.540 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:05.540 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.540 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.540 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.540 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.540 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:05.540 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.799 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.799 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.799 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.799 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:06.058 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.058 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:06.058 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.058 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:06.318 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.318 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:06.318 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:06.577 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:06.577 13:13:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:07.955 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:07.955 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:07.955 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.955 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.955 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.955 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:07.955 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.955 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:08.214 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.214 13:13:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:08.214 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.214 13:13:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:08.214 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.214 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:08.214 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.214 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.473 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.473 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:08.473 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.473 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.731 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.731 
13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:08.731 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.732 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.991 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.991 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:08.991 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:09.249 13:13:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:09.249 13:13:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:10.627 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:10.627 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:10.627 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.627 13:13:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.627 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.627 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:10.627 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.627 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.886 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.886 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.886 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.886 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:10.886 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.886 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:10.886 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.886 13:13:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:11.145 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.146 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:11.146 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.146 13:13:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:11.404 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.404 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:11.405 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.405 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:11.663 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.663 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:11.663 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:11.922 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:11.922 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:13.300 13:13:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:13.300 13:13:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:13.300 13:13:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.300 13:13:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:13.300 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.300 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:13.300 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.300 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:13.559 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.559 13:13:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:13.559 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.559 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:13.559 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.559 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:13.559 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.559 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:13.818 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.818 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:13.818 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.818 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.077 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.077 
13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:14.077 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.077 13:13:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1159836 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1159836 ']' 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1159836 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1159836 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1159836' 00:33:14.336 killing process with pid 1159836 00:33:14.336 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1159836 00:33:14.336 
13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1159836 00:33:14.336 { 00:33:14.336 "results": [ 00:33:14.336 { 00:33:14.336 "job": "Nvme0n1", 00:33:14.336 "core_mask": "0x4", 00:33:14.336 "workload": "verify", 00:33:14.336 "status": "terminated", 00:33:14.336 "verify_range": { 00:33:14.336 "start": 0, 00:33:14.336 "length": 16384 00:33:14.336 }, 00:33:14.336 "queue_depth": 128, 00:33:14.336 "io_size": 4096, 00:33:14.336 "runtime": 28.790974, 00:33:14.336 "iops": 10664.592312854717, 00:33:14.336 "mibps": 41.658563722088736, 00:33:14.336 "io_failed": 0, 00:33:14.336 "io_timeout": 0, 00:33:14.337 "avg_latency_us": 11981.558378765008, 00:33:14.337 "min_latency_us": 1014.2476190476191, 00:33:14.337 "max_latency_us": 3083812.083809524 00:33:14.337 } 00:33:14.337 ], 00:33:14.337 "core_count": 1 00:33:14.337 } 00:33:14.620 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1159836 00:33:14.620 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:14.620 [2024-12-15 13:12:52.013660] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:14.620 [2024-12-15 13:12:52.013711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159836 ] 00:33:14.620 [2024-12-15 13:12:52.088348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.620 [2024-12-15 13:12:52.110816] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:14.620 Running I/O for 90 seconds... 
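The "mibps" figure in the bdevperf results block above follows directly from the reported "iops" and "io_size" fields; with 4096-byte I/Os, MiB/s is simply IOPS divided by 256. A minimal Python sketch of that arithmetic, using the values reported in the JSON:

```python
# Sanity-check the throughput figures from the bdevperf "results" JSON.
# MiB/s = IOPS * io_size / 2**20; for 4 KiB I/Os that is IOPS / 256.
iops = 10664.592312854717   # "iops" from the results block
io_size = 4096              # "io_size" in bytes from the results block

mibps = iops * io_size / 2**20
print(round(mibps, 6))      # ≈ 41.658564, matching the reported "mibps"
```

The same relation can be used to cross-check any bdevperf run: if the printed value diverges from the reported "mibps", the io_size assumption is wrong.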
00:33:14.620 11619.00 IOPS, 45.39 MiB/s [2024-12-15T12:13:22.527Z] 11569.00 IOPS, 45.19 MiB/s [2024-12-15T12:13:22.527Z] 11530.33 IOPS, 45.04 MiB/s [2024-12-15T12:13:22.527Z] 11523.25 IOPS, 45.01 MiB/s [2024-12-15T12:13:22.527Z] 11541.00 IOPS, 45.08 MiB/s [2024-12-15T12:13:22.527Z] 11518.67 IOPS, 44.99 MiB/s [2024-12-15T12:13:22.527Z] 11510.57 IOPS, 44.96 MiB/s [2024-12-15T12:13:22.527Z] 11518.50 IOPS, 44.99 MiB/s [2024-12-15T12:13:22.527Z] 11534.67 IOPS, 45.06 MiB/s [2024-12-15T12:13:22.527Z] 11538.30 IOPS, 45.07 MiB/s [2024-12-15T12:13:22.527Z] 11548.36 IOPS, 45.11 MiB/s [2024-12-15T12:13:22.527Z] 11553.75 IOPS, 45.13 MiB/s [2024-12-15T12:13:22.527Z]
00:33:14.620 [2024-12-15 13:13:06.111179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:14.620 [2024-12-15 13:13:06.111218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs: every outstanding READ (lba 4816-4872) and WRITE (lba 4880-5632) on qid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:14.623 [2024-12-15 13:13:06.114020] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.623 [2024-12-15 13:13:06.114637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:14.623 [2024-12-15 13:13:06.114651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.624 [2024-12-15 13:13:06.114856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.114985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.114992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.624 [2024-12-15 13:13:06.115691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:14.624 [2024-12-15 13:13:06.115706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.115713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.115736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.625 [2024-12-15 13:13:06.115759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.625 [2024-12-15 13:13:06.115778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.625 [2024-12-15 13:13:06.115799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.625 [2024-12-15 13:13:06.115818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.625 [2024-12-15 13:13:06.115844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.625 [2024-12-15 13:13:06.115863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.625 [2024-12-15 13:13:06.115882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.115901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.115920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.115939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.115960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.115979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.115991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.115997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.116009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.116016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.116028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.116035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:14.625 [2024-12-15 13:13:06.116047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.625 [2024-12-15 13:13:06.126613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.126636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.126645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.126662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.126670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.126689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.126699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.126715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.126725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.126742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.126753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.126770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.126779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.127186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.127207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.127226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.127236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.127253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.127263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.127279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.127288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.127305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.127314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.127331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.625 [2024-12-15 13:13:06.127340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:33:14.625 [2024-12-15 13:13:06.127357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.127989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.127998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.626 [2024-12-15 13:13:06.128355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:14.626 [2024-12-15 13:13:06.128371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.128975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.128985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:14.627 [2024-12-15 13:13:06.129168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:14.627 [2024-12-15 13:13:06.129342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.627 [2024-12-15 13:13:06.129351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.129680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.129689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.130748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.130770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.130790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.130799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.130816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.130832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.130853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.130863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.130879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.130889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.130905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.130914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.130931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.130940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.130956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.130966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.130982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.130991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.131008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.628 [2024-12-15 13:13:06.131017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.131034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:14.628 [2024-12-15 13:13:06.131044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.131060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:14.628 [2024-12-15 13:13:06.131070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.131088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:14.628 [2024-12-15 13:13:06.131098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:14.628 [2024-12-15 13:13:06.131116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.628 [2024-12-15 13:13:06.131125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.628 [2024-12-15 13:13:06.131151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.628 [2024-12-15 13:13:06.131179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.628 [2024-12-15 13:13:06.131207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.628 [2024-12-15 13:13:06.131232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.628 [2024-12-15 13:13:06.131259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.628 [2024-12-15 13:13:06.131285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.628 [2024-12-15 13:13:06.131310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.628 [2024-12-15 13:13:06.131337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.628 [2024-12-15 13:13:06.131363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.628 [2024-12-15 13:13:06.131390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.628 [2024-12-15 13:13:06.131415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:14.628 [2024-12-15 13:13:06.131432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.628 [2024-12-15 13:13:06.131442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.131458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.131467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.131484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.131494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.131512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.131521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.131538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.131548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.131565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.131575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.131942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.131958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.131976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.131986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.629 [2024-12-15 13:13:06.132694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:14.629 [2024-12-15 13:13:06.132711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.132981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.132989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:14.630 [2024-12-15 13:13:06.133346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630 [2024-12-15 13:13:06.133355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:14.630
[2024-12-15 13:13:06.133371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.630
[2024-12-15 13:13:06.133380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:14.630
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: WRITE and READ commands on qid:1 (lba 4816-5832, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-12-15 13:13:06.133 through 13:13:06.142 ...]
[2024-12-15 13:13:06.142575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633
[2024-12-15 13:13:06.142585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:14.633
[2024-12-15 13:13:06.142600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.142980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:14.633 [2024-12-15 13:13:06.142994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.633 [2024-12-15 13:13:06.143003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.143978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.143992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.634 [2024-12-15 13:13:06.144046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.634 [2024-12-15 13:13:06.144427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:14.634 [2024-12-15 13:13:06.144443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.635 [2024-12-15 13:13:06.144750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.635 [2024-12-15 13:13:06.144775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.635 [2024-12-15 13:13:06.144798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.635 [2024-12-15 13:13:06.144821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.635 [2024-12-15 13:13:06.144849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.635 [2024-12-15 13:13:06.144872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:14.635 [2024-12-15 13:13:06.144887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.635 [2024-12-15 13:13:06.144895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:14.635 [2024-12-15 13:13:06.144910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:14.635 [2024-12-15 13:13:06.144918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:14.635 [2024-12-15 13:13:06.144934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.635 [2024-12-15 13:13:06.144943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
[... ~120 further near-identical command/completion pairs elided: READ/WRITE on sqid:1 (lba 4816-5832, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0028 through 0018 wrapping at 007f, timestamps 13:13:06.144957 through 13:13:06.149003 ...]
00:33:14.638 [2024-12-15 13:13:06.149018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.638 [2024-12-15 13:13:06.149027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:14.638 [2024-12-15 13:13:06.149043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.638 [2024-12-15 13:13:06.149226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.638 [2024-12-15 13:13:06.149250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.638 [2024-12-15 13:13:06.149278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.638 [2024-12-15 13:13:06.149302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.638 [2024-12-15 13:13:06.149328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.638 [2024-12-15 13:13:06.149353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.638 [2024-12-15 13:13:06.149380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:14.638 [2024-12-15 13:13:06.149560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.638 [2024-12-15 13:13:06.149569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.149584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.149595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.149610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.149619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.149635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.149644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.149660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.149669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.150977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.150993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.151018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.151043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.151067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.151092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.151117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.151143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.151168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.151194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:14.639 [2024-12-15 13:13:06.151218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.639 [2024-12-15 13:13:06.151227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:14.640 [2024-12-15 13:13:06.151515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.640 [2024-12-15 13:13:06.151524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for further WRITE and READ commands on qid:1 (lba 4816-5848), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:14.643 [2024-12-15 13:13:06.155743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.155982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.155998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.156453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.643 [2024-12-15 13:13:06.156462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:14.643 [2024-12-15 13:13:06.157045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.644 [2024-12-15 13:13:06.157390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:14.644 [2024-12-15 13:13:06.157868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.644 [2024-12-15 13:13:06.157876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:14.645 [2024-12-15 13:13:06.157890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.645 [2024-12-15 13:13:06.157897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:14.645 [2024-12-15 13:13:06.157910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.645 [2024-12-15 13:13:06.157917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:14.645 [2024-12-15 13:13:06.157930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.645 [2024-12-15 13:13:06.157936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0
[repetitive records elided: nvme_qpair.c command/completion pairs on qid:1 — READ commands (lba 4816-4872, SGL TRANSPORT DATA BLOCK) and WRITE commands (lba 4880-5832, SGL DATA BLOCK OFFSET, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 001d through 000d, timestamps 2024-12-15 13:13:06.157949 through 13:13:06.161301]
00:33:14.648 [2024-12-15 13:13:06.161314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.648 [2024-12-15 13:13:06.161679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.648 [2024-12-15 13:13:06.161700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.648 [2024-12-15 13:13:06.161719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.648 [2024-12-15 13:13:06.161739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.648 [2024-12-15 13:13:06.161759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.648 [2024-12-15 13:13:06.161778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.648 [2024-12-15 13:13:06.161797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.161941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.161954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.165479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.166033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.648 [2024-12-15 13:13:06.166049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:14.648 [2024-12-15 13:13:06.166063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:14.649 [2024-12-15 13:13:06.166847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.649 [2024-12-15 13:13:06.166854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: every queued WRITE and READ on qid:1 (lba 4816-5832, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0058 through 007f, wrapping to 0000-0049, timestamps 13:13:06.166867-13:13:06.170196 ...]
00:33:14.653 [2024-12-15 13:13:06.170208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.170912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.170920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.171474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.171488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.171502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.171510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.171525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.171532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.171545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.653 [2024-12-15 13:13:06.171552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:14.653 [2024-12-15 13:13:06.171565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.654 [2024-12-15 13:13:06.171878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.171982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.171994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:14.654 [2024-12-15 13:13:06.172188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.654 [2024-12-15 13:13:06.172195] nvme_qpair.c: 
00:33:14.654-00:33:14.657 [2024-12-15 13:13:06.172210 - 13:13:06.175563] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated WRITE (lba:4880-5832, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba:4816-4872, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands, sqid:1 nsid:1 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0011-007f wrapping to 0004 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.657 [2024-12-15 13:13:06.175571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:14.657 [2024-12-15 13:13:06.175584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.657 [2024-12-15 13:13:06.175590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:14.657 [2024-12-15 13:13:06.175603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.657 [2024-12-15 13:13:06.175610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:14.657 [2024-12-15 13:13:06.175622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.657 [2024-12-15 13:13:06.175629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:14.657 [2024-12-15 13:13:06.175641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.657 [2024-12-15 13:13:06.175649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:14.657 [2024-12-15 13:13:06.175661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.657 [2024-12-15 13:13:06.175669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:14.657 [2024-12-15 13:13:06.175681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.657 [2024-12-15 13:13:06.175688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:14.657 [2024-12-15 13:13:06.175700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.657 [2024-12-15 13:13:06.175707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:14.657 [2024-12-15 13:13:06.175719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.175984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.175991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.658 [2024-12-15 13:13:06.176109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.658 [2024-12-15 13:13:06.176129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.658 [2024-12-15 13:13:06.176149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.658 [2024-12-15 13:13:06.176169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.658 [2024-12-15 13:13:06.176188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.658 [2024-12-15 13:13:06.176208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.658 [2024-12-15 13:13:06.176228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.176980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.176997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.177005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.177021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.177028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.177043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.658 [2024-12-15 13:13:06.177053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:14.658 [2024-12-15 13:13:06.177069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:14.659 [2024-12-15 13:13:06.177655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:14.659 [2024-12-15 13:13:06.177662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:14.659 [2024-12-15 13:13:06.177679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... repeated WRITE command/completion NOTICE pairs for lba 5424-5672 elided; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:14.660 [2024-12-15 13:13:06.178451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:14.660 11389.08 IOPS, 44.49 MiB/s [2024-12-15T12:13:22.567Z] 10575.57 IOPS, 41.31 MiB/s [2024-12-15T12:13:22.567Z] 9870.53 IOPS, 38.56 MiB/s [2024-12-15T12:13:22.567Z] 9351.75 IOPS, 36.53 MiB/s [2024-12-15T12:13:22.567Z] 9478.71 IOPS, 37.03 MiB/s
[2024-12-15T12:13:22.567Z] 9591.06 IOPS, 37.47 MiB/s [2024-12-15T12:13:22.567Z] 9758.74 IOPS, 38.12 MiB/s [2024-12-15T12:13:22.567Z] 9948.45 IOPS, 38.86 MiB/s [2024-12-15T12:13:22.567Z] 10121.95 IOPS, 39.54 MiB/s [2024-12-15T12:13:22.567Z] 10177.77 IOPS, 39.76 MiB/s [2024-12-15T12:13:22.567Z] 10232.78 IOPS, 39.97 MiB/s [2024-12-15T12:13:22.567Z] 10301.17 IOPS, 40.24 MiB/s [2024-12-15T12:13:22.567Z] 10425.36 IOPS, 40.72 MiB/s [2024-12-15T12:13:22.567Z] 10544.62 IOPS, 41.19 MiB/s [2024-12-15T12:13:22.567Z]
00:33:14.660 [2024-12-15 13:13:19.790713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... repeated READ/WRITE command/completion NOTICE pairs for lba 12880-13944 elided; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:14.662 [2024-12-15 13:13:19.792666] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:14.662 [2024-12-15 13:13:19.792673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:14.662 10612.19 IOPS, 41.45 MiB/s [2024-12-15T12:13:22.569Z] 10644.36 IOPS, 41.58 MiB/s [2024-12-15T12:13:22.569Z] Received shutdown signal, test time was about 28.791597 seconds
00:33:14.662
00:33:14.662 Latency(us)
00:33:14.662 [2024-12-15T12:13:22.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:14.662 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:14.662 Verification LBA range: start 0x0 length 0x4000
00:33:14.662 Nvme0n1 : 28.79 10664.59 41.66 0.00 0.00 11981.56 1014.25 3083812.08
00:33:14.662 [2024-12-15T12:13:22.569Z] ===================================================================================================================
00:33:14.662 [2024-12-15T12:13:22.569Z] Total : 10664.59 41.66 0.00 0.00 11981.56 1014.25 3083812.08
00:33:14.662 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:14.662 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:14.662 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:14.662 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:14.662 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:14.662 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:33:14.662
13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:14.662 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:33:14.662 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:14.662 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:14.662 rmmod nvme_tcp
00:33:14.921 rmmod nvme_fabrics
00:33:14.921 rmmod nvme_keyring
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1159587 ']'
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1159587
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1159587 ']'
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1159587
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1159587
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1159587'
00:33:14.921 killing process with pid 1159587
00:33:14.921 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1159587
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1159587
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:14.922 13:13:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:17.456 13:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:17.456
00:33:17.456 real 0m40.508s
00:33:17.456 user 1m49.923s 00:33:17.456 sys 0m11.446s 00:33:17.456 13:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.456 13:13:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:17.456 ************************************ 00:33:17.456 END TEST nvmf_host_multipath_status 00:33:17.456 ************************************ 00:33:17.456 13:13:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:17.456 13:13:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:17.456 13:13:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.456 13:13:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.456 ************************************ 00:33:17.456 START TEST nvmf_discovery_remove_ifc 00:33:17.456 ************************************ 00:33:17.456 13:13:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:17.456 * Looking for test storage... 
00:33:17.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:33:17.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.456 --rc genhtml_branch_coverage=1 00:33:17.456 --rc genhtml_function_coverage=1 00:33:17.456 --rc genhtml_legend=1 00:33:17.456 --rc geninfo_all_blocks=1 00:33:17.456 --rc geninfo_unexecuted_blocks=1 00:33:17.456 00:33:17.456 ' 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:17.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.456 --rc genhtml_branch_coverage=1 00:33:17.456 --rc genhtml_function_coverage=1 00:33:17.456 --rc genhtml_legend=1 00:33:17.456 --rc geninfo_all_blocks=1 00:33:17.456 --rc geninfo_unexecuted_blocks=1 00:33:17.456 00:33:17.456 ' 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:17.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.456 --rc genhtml_branch_coverage=1 00:33:17.456 --rc genhtml_function_coverage=1 00:33:17.456 --rc genhtml_legend=1 00:33:17.456 --rc geninfo_all_blocks=1 00:33:17.456 --rc geninfo_unexecuted_blocks=1 00:33:17.456 00:33:17.456 ' 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:17.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.456 --rc genhtml_branch_coverage=1 00:33:17.456 --rc genhtml_function_coverage=1 00:33:17.456 --rc genhtml_legend=1 00:33:17.456 --rc geninfo_all_blocks=1 00:33:17.456 --rc geninfo_unexecuted_blocks=1 00:33:17.456 00:33:17.456 ' 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.456 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:17.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:17.457 
13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.457 13:13:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:24.091 13:13:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.091 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.092 13:13:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:24.092 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.092 13:13:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:24.092 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:24.092 Found net devices under 0000:af:00.0: cvl_0_0 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:24.092 Found net devices under 0000:af:00.1: cvl_0_1 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:24.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:33:24.092 00:33:24.092 --- 10.0.0.2 ping statistics --- 00:33:24.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.092 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:24.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:24.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:33:24.092 00:33:24.092 --- 10.0.0.1 ping statistics --- 00:33:24.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.092 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1168176 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1168176 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:24.092 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1168176 ']' 00:33:24.093 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.093 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.093 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.093 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.093 13:13:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.093 [2024-12-15 13:13:31.047047] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:24.093 [2024-12-15 13:13:31.047102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.093 [2024-12-15 13:13:31.124084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.093 [2024-12-15 13:13:31.145374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.093 [2024-12-15 13:13:31.145411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:24.093 [2024-12-15 13:13:31.145421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.093 [2024-12-15 13:13:31.145427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.093 [2024-12-15 13:13:31.145432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.093 [2024-12-15 13:13:31.145948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.093 [2024-12-15 13:13:31.285211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.093 [2024-12-15 13:13:31.293367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:24.093 null0 00:33:24.093 [2024-12-15 13:13:31.325365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1168289 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1168289 /tmp/host.sock 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1168289 ']' 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:24.093 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.093 [2024-12-15 13:13:31.393095] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:33:24.093 [2024-12-15 13:13:31.393137] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168289 ] 00:33:24.093 [2024-12-15 13:13:31.466843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.093 [2024-12-15 13:13:31.489715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.093 13:13:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.093 13:13:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.042 [2024-12-15 13:13:32.641287] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:25.042 [2024-12-15 13:13:32.641308] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:25.042 [2024-12-15 13:13:32.641320] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:25.042 [2024-12-15 13:13:32.727576] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:25.042 [2024-12-15 13:13:32.782060] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:25.042 [2024-12-15 13:13:32.782729] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a1c710:1 started. 
00:33:25.042 [2024-12-15 13:13:32.784016] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:25.042 [2024-12-15 13:13:32.784056] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:25.042 [2024-12-15 13:13:32.784074] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:25.042 [2024-12-15 13:13:32.784086] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:25.042 [2024-12-15 13:13:32.784105] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.042 [2024-12-15 13:13:32.790497] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a1c710 was disconnected and freed. delete nvme_qpair. 
00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:25.042 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.301 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:25.301 13:13:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:26.238 13:13:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:26.238 13:13:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:26.238 13:13:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:26.238 13:13:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:26.238 13:13:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.238 13:13:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.238 13:13:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:26.238 13:13:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.238 13:13:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:26.238 13:13:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:27.174 13:13:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:28.561 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:28.561 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:28.562 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:28.562 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.562 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:28.562 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.562 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:28.562 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.562 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:28.562 13:13:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:29.498 13:13:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:30.436 13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:30.436 13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.436 13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:30.436 13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.436 13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:30.436 13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.436 13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:30.436 13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.436 
13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:30.436 13:13:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:30.436 [2024-12-15 13:13:38.225721] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:30.436 [2024-12-15 13:13:38.225760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.436 [2024-12-15 13:13:38.225772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.436 [2024-12-15 13:13:38.225782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.436 [2024-12-15 13:13:38.225789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.436 [2024-12-15 13:13:38.225796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.436 [2024-12-15 13:13:38.225803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.436 [2024-12-15 13:13:38.225811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:30.436 [2024-12-15 13:13:38.225817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.436 [2024-12-15 13:13:38.225830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:30.436 [2024-12-15 13:13:38.225837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.436 [2024-12-15 13:13:38.225844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8ec0 is same with the state(6) to be set 00:33:30.436 [2024-12-15 13:13:38.235742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f8ec0 (9): Bad file descriptor 00:33:30.436 [2024-12-15 13:13:38.245779] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:30.436 [2024-12-15 13:13:38.245794] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:30.436 [2024-12-15 13:13:38.245800] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:30.436 [2024-12-15 13:13:38.245806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:30.436 [2024-12-15 13:13:38.245831] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:31.373 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:31.373 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.373 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:31.373 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.373 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:31.373 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.373 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:31.632 [2024-12-15 13:13:39.295924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:31.632 [2024-12-15 13:13:39.296004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f8ec0 with addr=10.0.0.2, port=4420 00:33:31.632 [2024-12-15 13:13:39.296037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8ec0 is same with the state(6) to be set 00:33:31.632 [2024-12-15 13:13:39.296089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f8ec0 (9): Bad file descriptor 00:33:31.632 [2024-12-15 13:13:39.297038] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:33:31.632 [2024-12-15 13:13:39.297102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:31.632 [2024-12-15 13:13:39.297126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:31.632 [2024-12-15 13:13:39.297150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:31.632 [2024-12-15 13:13:39.297171] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:31.632 [2024-12-15 13:13:39.297187] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:31.632 [2024-12-15 13:13:39.297200] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:31.632 [2024-12-15 13:13:39.297223] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:31.632 [2024-12-15 13:13:39.297236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:31.632 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.632 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:31.632 13:13:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:32.567 [2024-12-15 13:13:40.299747] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:32.567 [2024-12-15 13:13:40.299768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:32.567 [2024-12-15 13:13:40.299781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:32.567 [2024-12-15 13:13:40.299788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:32.567 [2024-12-15 13:13:40.299801] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:32.567 [2024-12-15 13:13:40.299808] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:32.567 [2024-12-15 13:13:40.299813] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:32.567 [2024-12-15 13:13:40.299817] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:32.567 [2024-12-15 13:13:40.299843] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:32.567 [2024-12-15 13:13:40.299867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.567 [2024-12-15 13:13:40.299877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.567 [2024-12-15 13:13:40.299888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.567 [2024-12-15 13:13:40.299895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.567 [2024-12-15 13:13:40.299902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:32.567 [2024-12-15 13:13:40.299909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.567 [2024-12-15 13:13:40.299917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.567 [2024-12-15 13:13:40.299925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.567 [2024-12-15 13:13:40.299933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.567 [2024-12-15 13:13:40.299940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.567 [2024-12-15 13:13:40.299949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:32.567 [2024-12-15 13:13:40.300248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e85e0 (9): Bad file descriptor 00:33:32.567 [2024-12-15 13:13:40.301259] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:32.567 [2024-12-15 13:13:40.301270] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:32.567 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:32.827 13:13:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:33.763 13:13:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:34.699 [2024-12-15 13:13:42.315978] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:34.699 [2024-12-15 13:13:42.315995] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:34.699 [2024-12-15 13:13:42.316007] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:34.699 [2024-12-15 13:13:42.404269] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:34.699 [2024-12-15 13:13:42.465711] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:34.699 [2024-12-15 13:13:42.466327] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x19f9260:1 started. 00:33:34.699 [2024-12-15 13:13:42.467338] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:34.699 [2024-12-15 13:13:42.467369] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:34.699 [2024-12-15 13:13:42.467385] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:34.699 [2024-12-15 13:13:42.467398] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:34.699 [2024-12-15 13:13:42.467406] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:34.699 [2024-12-15 13:13:42.515356] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x19f9260 was disconnected and freed. delete nvme_qpair. 
00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1168289 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1168289 ']' 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1168289 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:34.958 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:34.959 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168289 
00:33:34.959 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:34.959 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:34.959 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168289' 00:33:34.959 killing process with pid 1168289 00:33:34.959 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1168289 00:33:34.959 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1168289 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:35.218 rmmod nvme_tcp 00:33:35.218 rmmod nvme_fabrics 00:33:35.218 rmmod nvme_keyring 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1168176 ']' 00:33:35.218 
13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1168176 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1168176 ']' 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1168176 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:35.218 13:13:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168176 00:33:35.218 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:35.218 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:35.218 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168176' 00:33:35.218 killing process with pid 1168176 00:33:35.218 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1168176 00:33:35.218 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1168176 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:35.477 13:13:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.477 13:13:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.384 13:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.384 00:33:37.384 real 0m20.311s 00:33:37.384 user 0m24.605s 00:33:37.384 sys 0m5.702s 00:33:37.384 13:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:37.384 13:13:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.384 ************************************ 00:33:37.384 END TEST nvmf_discovery_remove_ifc 00:33:37.384 ************************************ 00:33:37.643 13:13:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:37.643 13:13:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:37.643 13:13:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:37.643 13:13:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.643 ************************************ 
00:33:37.644 START TEST nvmf_identify_kernel_target 00:33:37.644 ************************************ 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:37.644 * Looking for test storage... 00:33:37.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:37.644 13:13:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:37.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.644 --rc genhtml_branch_coverage=1 00:33:37.644 --rc genhtml_function_coverage=1 00:33:37.644 --rc genhtml_legend=1 00:33:37.644 --rc geninfo_all_blocks=1 00:33:37.644 --rc geninfo_unexecuted_blocks=1 00:33:37.644 00:33:37.644 ' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:37.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.644 --rc genhtml_branch_coverage=1 00:33:37.644 --rc genhtml_function_coverage=1 00:33:37.644 --rc genhtml_legend=1 00:33:37.644 --rc geninfo_all_blocks=1 00:33:37.644 --rc geninfo_unexecuted_blocks=1 00:33:37.644 00:33:37.644 ' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:37.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.644 --rc genhtml_branch_coverage=1 00:33:37.644 --rc genhtml_function_coverage=1 00:33:37.644 --rc genhtml_legend=1 00:33:37.644 --rc geninfo_all_blocks=1 00:33:37.644 --rc geninfo_unexecuted_blocks=1 00:33:37.644 00:33:37.644 ' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:37.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.644 --rc genhtml_branch_coverage=1 00:33:37.644 --rc genhtml_function_coverage=1 00:33:37.644 --rc genhtml_legend=1 00:33:37.644 --rc geninfo_all_blocks=1 
00:33:37.644 --rc geninfo_unexecuted_blocks=1 00:33:37.644 00:33:37.644 ' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:37.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:37.644 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.645 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.903 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:37.903 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:37.903 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:37.903 13:13:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.177 13:13:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.177 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:43.177 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.437 13:13:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:43.437 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.437 13:13:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:43.437 Found net devices under 0000:af:00.0: cvl_0_0 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.437 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:43.438 Found net devices under 0000:af:00.1: cvl_0_1 
00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:43.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:43.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:33:43.438 00:33:43.438 --- 10.0.0.2 ping statistics --- 00:33:43.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.438 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:43.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:33:43.438 00:33:43.438 --- 10.0.0.1 ping statistics --- 00:33:43.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.438 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:43.438 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:43.698 
13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:43.698 13:13:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:46.235 Waiting for block devices as requested 00:33:46.494 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:46.494 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:46.494 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:46.753 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:46.753 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:46.753 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:47.012 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:47.012 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:47.012 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:47.012 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:47.277 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:47.277 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:47.277 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:47.536 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:47.536 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:33:47.536 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:47.536 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:47.796 No valid GPT data, bailing 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:47.796 00:33:47.796 Discovery Log Number of Records 2, Generation counter 2 00:33:47.796 =====Discovery Log Entry 0====== 00:33:47.796 trtype: tcp 00:33:47.796 adrfam: ipv4 00:33:47.796 subtype: current discovery subsystem 
00:33:47.796 treq: not specified, sq flow control disable supported 00:33:47.796 portid: 1 00:33:47.796 trsvcid: 4420 00:33:47.796 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:47.796 traddr: 10.0.0.1 00:33:47.796 eflags: none 00:33:47.796 sectype: none 00:33:47.796 =====Discovery Log Entry 1====== 00:33:47.796 trtype: tcp 00:33:47.796 adrfam: ipv4 00:33:47.796 subtype: nvme subsystem 00:33:47.796 treq: not specified, sq flow control disable supported 00:33:47.796 portid: 1 00:33:47.796 trsvcid: 4420 00:33:47.796 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:47.796 traddr: 10.0.0.1 00:33:47.796 eflags: none 00:33:47.796 sectype: none 00:33:47.796 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:47.796 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:48.056 ===================================================== 00:33:48.056 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:48.056 ===================================================== 00:33:48.056 Controller Capabilities/Features 00:33:48.056 ================================ 00:33:48.056 Vendor ID: 0000 00:33:48.056 Subsystem Vendor ID: 0000 00:33:48.056 Serial Number: 3246f08d99f2e811eb08 00:33:48.056 Model Number: Linux 00:33:48.056 Firmware Version: 6.8.9-20 00:33:48.056 Recommended Arb Burst: 0 00:33:48.056 IEEE OUI Identifier: 00 00 00 00:33:48.056 Multi-path I/O 00:33:48.056 May have multiple subsystem ports: No 00:33:48.056 May have multiple controllers: No 00:33:48.056 Associated with SR-IOV VF: No 00:33:48.056 Max Data Transfer Size: Unlimited 00:33:48.056 Max Number of Namespaces: 0 00:33:48.056 Max Number of I/O Queues: 1024 00:33:48.056 NVMe Specification Version (VS): 1.3 00:33:48.056 NVMe Specification Version (Identify): 1.3 00:33:48.056 Maximum Queue Entries: 1024 
00:33:48.056 Contiguous Queues Required: No 00:33:48.056 Arbitration Mechanisms Supported 00:33:48.056 Weighted Round Robin: Not Supported 00:33:48.056 Vendor Specific: Not Supported 00:33:48.056 Reset Timeout: 7500 ms 00:33:48.056 Doorbell Stride: 4 bytes 00:33:48.056 NVM Subsystem Reset: Not Supported 00:33:48.056 Command Sets Supported 00:33:48.056 NVM Command Set: Supported 00:33:48.056 Boot Partition: Not Supported 00:33:48.056 Memory Page Size Minimum: 4096 bytes 00:33:48.056 Memory Page Size Maximum: 4096 bytes 00:33:48.056 Persistent Memory Region: Not Supported 00:33:48.056 Optional Asynchronous Events Supported 00:33:48.056 Namespace Attribute Notices: Not Supported 00:33:48.056 Firmware Activation Notices: Not Supported 00:33:48.056 ANA Change Notices: Not Supported 00:33:48.056 PLE Aggregate Log Change Notices: Not Supported 00:33:48.056 LBA Status Info Alert Notices: Not Supported 00:33:48.056 EGE Aggregate Log Change Notices: Not Supported 00:33:48.056 Normal NVM Subsystem Shutdown event: Not Supported 00:33:48.056 Zone Descriptor Change Notices: Not Supported 00:33:48.056 Discovery Log Change Notices: Supported 00:33:48.056 Controller Attributes 00:33:48.056 128-bit Host Identifier: Not Supported 00:33:48.056 Non-Operational Permissive Mode: Not Supported 00:33:48.056 NVM Sets: Not Supported 00:33:48.056 Read Recovery Levels: Not Supported 00:33:48.056 Endurance Groups: Not Supported 00:33:48.056 Predictable Latency Mode: Not Supported 00:33:48.056 Traffic Based Keep ALive: Not Supported 00:33:48.056 Namespace Granularity: Not Supported 00:33:48.056 SQ Associations: Not Supported 00:33:48.056 UUID List: Not Supported 00:33:48.056 Multi-Domain Subsystem: Not Supported 00:33:48.056 Fixed Capacity Management: Not Supported 00:33:48.056 Variable Capacity Management: Not Supported 00:33:48.056 Delete Endurance Group: Not Supported 00:33:48.056 Delete NVM Set: Not Supported 00:33:48.056 Extended LBA Formats Supported: Not Supported 00:33:48.056 Flexible 
Data Placement Supported: Not Supported 00:33:48.056 00:33:48.056 Controller Memory Buffer Support 00:33:48.056 ================================ 00:33:48.056 Supported: No 00:33:48.056 00:33:48.056 Persistent Memory Region Support 00:33:48.056 ================================ 00:33:48.056 Supported: No 00:33:48.056 00:33:48.056 Admin Command Set Attributes 00:33:48.056 ============================ 00:33:48.056 Security Send/Receive: Not Supported 00:33:48.056 Format NVM: Not Supported 00:33:48.056 Firmware Activate/Download: Not Supported 00:33:48.056 Namespace Management: Not Supported 00:33:48.056 Device Self-Test: Not Supported 00:33:48.056 Directives: Not Supported 00:33:48.056 NVMe-MI: Not Supported 00:33:48.056 Virtualization Management: Not Supported 00:33:48.056 Doorbell Buffer Config: Not Supported 00:33:48.056 Get LBA Status Capability: Not Supported 00:33:48.056 Command & Feature Lockdown Capability: Not Supported 00:33:48.056 Abort Command Limit: 1 00:33:48.056 Async Event Request Limit: 1 00:33:48.056 Number of Firmware Slots: N/A 00:33:48.056 Firmware Slot 1 Read-Only: N/A 00:33:48.056 Firmware Activation Without Reset: N/A 00:33:48.056 Multiple Update Detection Support: N/A 00:33:48.056 Firmware Update Granularity: No Information Provided 00:33:48.056 Per-Namespace SMART Log: No 00:33:48.056 Asymmetric Namespace Access Log Page: Not Supported 00:33:48.056 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:48.056 Command Effects Log Page: Not Supported 00:33:48.056 Get Log Page Extended Data: Supported 00:33:48.056 Telemetry Log Pages: Not Supported 00:33:48.056 Persistent Event Log Pages: Not Supported 00:33:48.056 Supported Log Pages Log Page: May Support 00:33:48.056 Commands Supported & Effects Log Page: Not Supported 00:33:48.056 Feature Identifiers & Effects Log Page:May Support 00:33:48.056 NVMe-MI Commands & Effects Log Page: May Support 00:33:48.057 Data Area 4 for Telemetry Log: Not Supported 00:33:48.057 Error Log Page Entries 
Supported: 1 00:33:48.057 Keep Alive: Not Supported 00:33:48.057 00:33:48.057 NVM Command Set Attributes 00:33:48.057 ========================== 00:33:48.057 Submission Queue Entry Size 00:33:48.057 Max: 1 00:33:48.057 Min: 1 00:33:48.057 Completion Queue Entry Size 00:33:48.057 Max: 1 00:33:48.057 Min: 1 00:33:48.057 Number of Namespaces: 0 00:33:48.057 Compare Command: Not Supported 00:33:48.057 Write Uncorrectable Command: Not Supported 00:33:48.057 Dataset Management Command: Not Supported 00:33:48.057 Write Zeroes Command: Not Supported 00:33:48.057 Set Features Save Field: Not Supported 00:33:48.057 Reservations: Not Supported 00:33:48.057 Timestamp: Not Supported 00:33:48.057 Copy: Not Supported 00:33:48.057 Volatile Write Cache: Not Present 00:33:48.057 Atomic Write Unit (Normal): 1 00:33:48.057 Atomic Write Unit (PFail): 1 00:33:48.057 Atomic Compare & Write Unit: 1 00:33:48.057 Fused Compare & Write: Not Supported 00:33:48.057 Scatter-Gather List 00:33:48.057 SGL Command Set: Supported 00:33:48.057 SGL Keyed: Not Supported 00:33:48.057 SGL Bit Bucket Descriptor: Not Supported 00:33:48.057 SGL Metadata Pointer: Not Supported 00:33:48.057 Oversized SGL: Not Supported 00:33:48.057 SGL Metadata Address: Not Supported 00:33:48.057 SGL Offset: Supported 00:33:48.057 Transport SGL Data Block: Not Supported 00:33:48.057 Replay Protected Memory Block: Not Supported 00:33:48.057 00:33:48.057 Firmware Slot Information 00:33:48.057 ========================= 00:33:48.057 Active slot: 0 00:33:48.057 00:33:48.057 00:33:48.057 Error Log 00:33:48.057 ========= 00:33:48.057 00:33:48.057 Active Namespaces 00:33:48.057 ================= 00:33:48.057 Discovery Log Page 00:33:48.057 ================== 00:33:48.057 Generation Counter: 2 00:33:48.057 Number of Records: 2 00:33:48.057 Record Format: 0 00:33:48.057 00:33:48.057 Discovery Log Entry 0 00:33:48.057 ---------------------- 00:33:48.057 Transport Type: 3 (TCP) 00:33:48.057 Address Family: 1 (IPv4) 00:33:48.057 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:33:48.057 Entry Flags: 00:33:48.057 Duplicate Returned Information: 0 00:33:48.057 Explicit Persistent Connection Support for Discovery: 0 00:33:48.057 Transport Requirements: 00:33:48.057 Secure Channel: Not Specified 00:33:48.057 Port ID: 1 (0x0001) 00:33:48.057 Controller ID: 65535 (0xffff) 00:33:48.057 Admin Max SQ Size: 32 00:33:48.057 Transport Service Identifier: 4420 00:33:48.057 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:48.057 Transport Address: 10.0.0.1 00:33:48.057 Discovery Log Entry 1 00:33:48.057 ---------------------- 00:33:48.057 Transport Type: 3 (TCP) 00:33:48.057 Address Family: 1 (IPv4) 00:33:48.057 Subsystem Type: 2 (NVM Subsystem) 00:33:48.057 Entry Flags: 00:33:48.057 Duplicate Returned Information: 0 00:33:48.057 Explicit Persistent Connection Support for Discovery: 0 00:33:48.057 Transport Requirements: 00:33:48.057 Secure Channel: Not Specified 00:33:48.057 Port ID: 1 (0x0001) 00:33:48.057 Controller ID: 65535 (0xffff) 00:33:48.057 Admin Max SQ Size: 32 00:33:48.057 Transport Service Identifier: 4420 00:33:48.057 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:48.057 Transport Address: 10.0.0.1 00:33:48.057 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:48.057 get_feature(0x01) failed 00:33:48.057 get_feature(0x02) failed 00:33:48.057 get_feature(0x04) failed 00:33:48.057 ===================================================== 00:33:48.057 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:48.057 ===================================================== 00:33:48.057 Controller Capabilities/Features 00:33:48.057 ================================ 00:33:48.057 Vendor ID: 0000 00:33:48.057 Subsystem Vendor ID: 
0000 00:33:48.057 Serial Number: 9c2e7532758f70bd3aa3 00:33:48.057 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:48.057 Firmware Version: 6.8.9-20 00:33:48.057 Recommended Arb Burst: 6 00:33:48.057 IEEE OUI Identifier: 00 00 00 00:33:48.057 Multi-path I/O 00:33:48.057 May have multiple subsystem ports: Yes 00:33:48.057 May have multiple controllers: Yes 00:33:48.057 Associated with SR-IOV VF: No 00:33:48.057 Max Data Transfer Size: Unlimited 00:33:48.057 Max Number of Namespaces: 1024 00:33:48.057 Max Number of I/O Queues: 128 00:33:48.057 NVMe Specification Version (VS): 1.3 00:33:48.057 NVMe Specification Version (Identify): 1.3 00:33:48.057 Maximum Queue Entries: 1024 00:33:48.057 Contiguous Queues Required: No 00:33:48.057 Arbitration Mechanisms Supported 00:33:48.057 Weighted Round Robin: Not Supported 00:33:48.057 Vendor Specific: Not Supported 00:33:48.057 Reset Timeout: 7500 ms 00:33:48.057 Doorbell Stride: 4 bytes 00:33:48.057 NVM Subsystem Reset: Not Supported 00:33:48.057 Command Sets Supported 00:33:48.057 NVM Command Set: Supported 00:33:48.057 Boot Partition: Not Supported 00:33:48.057 Memory Page Size Minimum: 4096 bytes 00:33:48.057 Memory Page Size Maximum: 4096 bytes 00:33:48.057 Persistent Memory Region: Not Supported 00:33:48.057 Optional Asynchronous Events Supported 00:33:48.057 Namespace Attribute Notices: Supported 00:33:48.057 Firmware Activation Notices: Not Supported 00:33:48.057 ANA Change Notices: Supported 00:33:48.057 PLE Aggregate Log Change Notices: Not Supported 00:33:48.057 LBA Status Info Alert Notices: Not Supported 00:33:48.057 EGE Aggregate Log Change Notices: Not Supported 00:33:48.057 Normal NVM Subsystem Shutdown event: Not Supported 00:33:48.057 Zone Descriptor Change Notices: Not Supported 00:33:48.057 Discovery Log Change Notices: Not Supported 00:33:48.057 Controller Attributes 00:33:48.057 128-bit Host Identifier: Supported 00:33:48.057 Non-Operational Permissive Mode: Not Supported 00:33:48.057 NVM Sets: Not 
Supported 00:33:48.057 Read Recovery Levels: Not Supported 00:33:48.057 Endurance Groups: Not Supported 00:33:48.057 Predictable Latency Mode: Not Supported 00:33:48.057 Traffic Based Keep ALive: Supported 00:33:48.057 Namespace Granularity: Not Supported 00:33:48.057 SQ Associations: Not Supported 00:33:48.057 UUID List: Not Supported 00:33:48.057 Multi-Domain Subsystem: Not Supported 00:33:48.057 Fixed Capacity Management: Not Supported 00:33:48.057 Variable Capacity Management: Not Supported 00:33:48.057 Delete Endurance Group: Not Supported 00:33:48.057 Delete NVM Set: Not Supported 00:33:48.057 Extended LBA Formats Supported: Not Supported 00:33:48.057 Flexible Data Placement Supported: Not Supported 00:33:48.057 00:33:48.057 Controller Memory Buffer Support 00:33:48.057 ================================ 00:33:48.057 Supported: No 00:33:48.057 00:33:48.057 Persistent Memory Region Support 00:33:48.057 ================================ 00:33:48.057 Supported: No 00:33:48.057 00:33:48.057 Admin Command Set Attributes 00:33:48.057 ============================ 00:33:48.057 Security Send/Receive: Not Supported 00:33:48.057 Format NVM: Not Supported 00:33:48.057 Firmware Activate/Download: Not Supported 00:33:48.057 Namespace Management: Not Supported 00:33:48.057 Device Self-Test: Not Supported 00:33:48.057 Directives: Not Supported 00:33:48.057 NVMe-MI: Not Supported 00:33:48.057 Virtualization Management: Not Supported 00:33:48.057 Doorbell Buffer Config: Not Supported 00:33:48.057 Get LBA Status Capability: Not Supported 00:33:48.057 Command & Feature Lockdown Capability: Not Supported 00:33:48.057 Abort Command Limit: 4 00:33:48.057 Async Event Request Limit: 4 00:33:48.057 Number of Firmware Slots: N/A 00:33:48.057 Firmware Slot 1 Read-Only: N/A 00:33:48.057 Firmware Activation Without Reset: N/A 00:33:48.057 Multiple Update Detection Support: N/A 00:33:48.057 Firmware Update Granularity: No Information Provided 00:33:48.057 Per-Namespace SMART Log: Yes 
00:33:48.057 Asymmetric Namespace Access Log Page: Supported 00:33:48.057 ANA Transition Time : 10 sec 00:33:48.057 00:33:48.057 Asymmetric Namespace Access Capabilities 00:33:48.057 ANA Optimized State : Supported 00:33:48.057 ANA Non-Optimized State : Supported 00:33:48.057 ANA Inaccessible State : Supported 00:33:48.057 ANA Persistent Loss State : Supported 00:33:48.057 ANA Change State : Supported 00:33:48.057 ANAGRPID is not changed : No 00:33:48.057 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:48.057 00:33:48.057 ANA Group Identifier Maximum : 128 00:33:48.057 Number of ANA Group Identifiers : 128 00:33:48.057 Max Number of Allowed Namespaces : 1024 00:33:48.057 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:48.057 Command Effects Log Page: Supported 00:33:48.057 Get Log Page Extended Data: Supported 00:33:48.057 Telemetry Log Pages: Not Supported 00:33:48.057 Persistent Event Log Pages: Not Supported 00:33:48.057 Supported Log Pages Log Page: May Support 00:33:48.057 Commands Supported & Effects Log Page: Not Supported 00:33:48.057 Feature Identifiers & Effects Log Page:May Support 00:33:48.058 NVMe-MI Commands & Effects Log Page: May Support 00:33:48.058 Data Area 4 for Telemetry Log: Not Supported 00:33:48.058 Error Log Page Entries Supported: 128 00:33:48.058 Keep Alive: Supported 00:33:48.058 Keep Alive Granularity: 1000 ms 00:33:48.058 00:33:48.058 NVM Command Set Attributes 00:33:48.058 ========================== 00:33:48.058 Submission Queue Entry Size 00:33:48.058 Max: 64 00:33:48.058 Min: 64 00:33:48.058 Completion Queue Entry Size 00:33:48.058 Max: 16 00:33:48.058 Min: 16 00:33:48.058 Number of Namespaces: 1024 00:33:48.058 Compare Command: Not Supported 00:33:48.058 Write Uncorrectable Command: Not Supported 00:33:48.058 Dataset Management Command: Supported 00:33:48.058 Write Zeroes Command: Supported 00:33:48.058 Set Features Save Field: Not Supported 00:33:48.058 Reservations: Not Supported 00:33:48.058 Timestamp: Not Supported 
00:33:48.058 Copy: Not Supported 00:33:48.058 Volatile Write Cache: Present 00:33:48.058 Atomic Write Unit (Normal): 1 00:33:48.058 Atomic Write Unit (PFail): 1 00:33:48.058 Atomic Compare & Write Unit: 1 00:33:48.058 Fused Compare & Write: Not Supported 00:33:48.058 Scatter-Gather List 00:33:48.058 SGL Command Set: Supported 00:33:48.058 SGL Keyed: Not Supported 00:33:48.058 SGL Bit Bucket Descriptor: Not Supported 00:33:48.058 SGL Metadata Pointer: Not Supported 00:33:48.058 Oversized SGL: Not Supported 00:33:48.058 SGL Metadata Address: Not Supported 00:33:48.058 SGL Offset: Supported 00:33:48.058 Transport SGL Data Block: Not Supported 00:33:48.058 Replay Protected Memory Block: Not Supported 00:33:48.058 00:33:48.058 Firmware Slot Information 00:33:48.058 ========================= 00:33:48.058 Active slot: 0 00:33:48.058 00:33:48.058 Asymmetric Namespace Access 00:33:48.058 =========================== 00:33:48.058 Change Count : 0 00:33:48.058 Number of ANA Group Descriptors : 1 00:33:48.058 ANA Group Descriptor : 0 00:33:48.058 ANA Group ID : 1 00:33:48.058 Number of NSID Values : 1 00:33:48.058 Change Count : 0 00:33:48.058 ANA State : 1 00:33:48.058 Namespace Identifier : 1 00:33:48.058 00:33:48.058 Commands Supported and Effects 00:33:48.058 ============================== 00:33:48.058 Admin Commands 00:33:48.058 -------------- 00:33:48.058 Get Log Page (02h): Supported 00:33:48.058 Identify (06h): Supported 00:33:48.058 Abort (08h): Supported 00:33:48.058 Set Features (09h): Supported 00:33:48.058 Get Features (0Ah): Supported 00:33:48.058 Asynchronous Event Request (0Ch): Supported 00:33:48.058 Keep Alive (18h): Supported 00:33:48.058 I/O Commands 00:33:48.058 ------------ 00:33:48.058 Flush (00h): Supported 00:33:48.058 Write (01h): Supported LBA-Change 00:33:48.058 Read (02h): Supported 00:33:48.058 Write Zeroes (08h): Supported LBA-Change 00:33:48.058 Dataset Management (09h): Supported 00:33:48.058 00:33:48.058 Error Log 00:33:48.058 ========= 
00:33:48.058 Entry: 0 00:33:48.058 Error Count: 0x3 00:33:48.058 Submission Queue Id: 0x0 00:33:48.058 Command Id: 0x5 00:33:48.058 Phase Bit: 0 00:33:48.058 Status Code: 0x2 00:33:48.058 Status Code Type: 0x0 00:33:48.058 Do Not Retry: 1 00:33:48.058 Error Location: 0x28 00:33:48.058 LBA: 0x0 00:33:48.058 Namespace: 0x0 00:33:48.058 Vendor Log Page: 0x0 00:33:48.058 ----------- 00:33:48.058 Entry: 1 00:33:48.058 Error Count: 0x2 00:33:48.058 Submission Queue Id: 0x0 00:33:48.058 Command Id: 0x5 00:33:48.058 Phase Bit: 0 00:33:48.058 Status Code: 0x2 00:33:48.058 Status Code Type: 0x0 00:33:48.058 Do Not Retry: 1 00:33:48.058 Error Location: 0x28 00:33:48.058 LBA: 0x0 00:33:48.058 Namespace: 0x0 00:33:48.058 Vendor Log Page: 0x0 00:33:48.058 ----------- 00:33:48.058 Entry: 2 00:33:48.058 Error Count: 0x1 00:33:48.058 Submission Queue Id: 0x0 00:33:48.058 Command Id: 0x4 00:33:48.058 Phase Bit: 0 00:33:48.058 Status Code: 0x2 00:33:48.058 Status Code Type: 0x0 00:33:48.058 Do Not Retry: 1 00:33:48.058 Error Location: 0x28 00:33:48.058 LBA: 0x0 00:33:48.058 Namespace: 0x0 00:33:48.058 Vendor Log Page: 0x0 00:33:48.058 00:33:48.058 Number of Queues 00:33:48.058 ================ 00:33:48.058 Number of I/O Submission Queues: 128 00:33:48.058 Number of I/O Completion Queues: 128 00:33:48.058 00:33:48.058 ZNS Specific Controller Data 00:33:48.058 ============================ 00:33:48.058 Zone Append Size Limit: 0 00:33:48.058 00:33:48.058 00:33:48.058 Active Namespaces 00:33:48.058 ================= 00:33:48.058 get_feature(0x05) failed 00:33:48.058 Namespace ID:1 00:33:48.058 Command Set Identifier: NVM (00h) 00:33:48.058 Deallocate: Supported 00:33:48.058 Deallocated/Unwritten Error: Not Supported 00:33:48.058 Deallocated Read Value: Unknown 00:33:48.058 Deallocate in Write Zeroes: Not Supported 00:33:48.058 Deallocated Guard Field: 0xFFFF 00:33:48.058 Flush: Supported 00:33:48.058 Reservation: Not Supported 00:33:48.058 Namespace Sharing Capabilities: Multiple 
Controllers 00:33:48.058 Size (in LBAs): 1953525168 (931GiB) 00:33:48.058 Capacity (in LBAs): 1953525168 (931GiB) 00:33:48.058 Utilization (in LBAs): 1953525168 (931GiB) 00:33:48.058 UUID: c74bd870-703b-47b7-929c-5f5d544f103a 00:33:48.058 Thin Provisioning: Not Supported 00:33:48.058 Per-NS Atomic Units: Yes 00:33:48.058 Atomic Boundary Size (Normal): 0 00:33:48.058 Atomic Boundary Size (PFail): 0 00:33:48.058 Atomic Boundary Offset: 0 00:33:48.058 NGUID/EUI64 Never Reused: No 00:33:48.058 ANA group ID: 1 00:33:48.058 Namespace Write Protected: No 00:33:48.058 Number of LBA Formats: 1 00:33:48.058 Current LBA Format: LBA Format #00 00:33:48.058 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:48.058 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:48.058 rmmod nvme_tcp 00:33:48.058 rmmod nvme_fabrics 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.058 13:13:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:50.594 13:13:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:50.594 13:13:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:53.131 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:53.131 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
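The `clean_kernel_target` steps above tear down the kernel nvmet configfs hierarchy leaf-first (port-to-subsystem link, then namespace, port, and subsystem) before unloading `nvmet_tcp`/`nvmet`. A minimal sketch of that ordering, run against a throwaway mock tree rather than the real `/sys/kernel/config/nvmet` (so it needs no root and omits the `modprobe -r` step; the NQN is the one from the log):

```shell
#!/bin/sh
# Sketch of the nvmet configfs teardown order shown in the log above.
# A mock directory tree stands in for /sys/kernel/config/nvmet; in real
# configfs a single rmdir removes a whole group, so the extra rmdirs on
# the nested namespaces/ and subsystems/ directories are mock-only steps.
set -eu

root=$(mktemp -d)
nqn=nqn.2016-06.io.spdk:testnqn

# Recreate the hierarchy the test built earlier.
mkdir -p "$root/subsystems/$nqn/namespaces/1"
mkdir -p "$root/ports/1/subsystems"
touch "$root/ports/1/subsystems/$nqn"      # stands in for the port symlink

# Teardown must run leaf-first, mirroring nvmf/common.sh@716-719:
rm -f "$root/ports/1/subsystems/$nqn"      # unlink subsystem from port
rmdir "$root/subsystems/$nqn/namespaces/1" # remove namespace 1
rmdir "$root/subsystems/$nqn/namespaces"   # (mock-only)
rmdir "$root/ports/1/subsystems"           # (mock-only)
rmdir "$root/ports/1"                      # remove port
rmdir "$root/subsystems/$nqn"              # remove subsystem
```

Running the rmdirs in the opposite order would fail, since configfs (like a plain directory here) refuses to remove a node that still has children.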
00:33:54.070 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:54.070 00:33:54.070 real 0m16.607s 00:33:54.070 user 0m4.276s 00:33:54.070 sys 0m8.665s 00:33:54.070 13:14:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:54.070 13:14:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:54.070 ************************************ 00:33:54.070 END TEST nvmf_identify_kernel_target 00:33:54.070 ************************************ 00:33:54.070 13:14:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:54.070 13:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:54.070 13:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:54.070 13:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.329 ************************************ 00:33:54.329 START TEST nvmf_auth_host 00:33:54.329 ************************************ 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:54.330 * Looking for test storage... 
00:33:54.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:54.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.330 --rc genhtml_branch_coverage=1 00:33:54.330 --rc genhtml_function_coverage=1 00:33:54.330 --rc genhtml_legend=1 00:33:54.330 --rc geninfo_all_blocks=1 00:33:54.330 --rc geninfo_unexecuted_blocks=1 00:33:54.330 00:33:54.330 ' 00:33:54.330 13:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:54.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.330 --rc genhtml_branch_coverage=1 00:33:54.330 --rc genhtml_function_coverage=1 00:33:54.330 --rc genhtml_legend=1 00:33:54.330 --rc geninfo_all_blocks=1 00:33:54.330 --rc geninfo_unexecuted_blocks=1 00:33:54.330 00:33:54.330 ' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:54.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.330 --rc genhtml_branch_coverage=1 00:33:54.330 --rc genhtml_function_coverage=1 00:33:54.330 --rc genhtml_legend=1 00:33:54.330 --rc geninfo_all_blocks=1 00:33:54.330 --rc geninfo_unexecuted_blocks=1 00:33:54.330 00:33:54.330 ' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:54.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.330 --rc genhtml_branch_coverage=1 00:33:54.330 --rc genhtml_function_coverage=1 00:33:54.330 --rc genhtml_legend=1 00:33:54.330 --rc geninfo_all_blocks=1 00:33:54.330 --rc geninfo_unexecuted_blocks=1 00:33:54.330 00:33:54.330 ' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.330 13:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:54.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:54.330 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:54.331 13:14:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:54.331 13:14:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:00.901 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:00.901 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:00.901 Found net devices under 0000:af:00.0: cvl_0_0 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:00.901 Found net devices under 0000:af:00.1: cvl_0_1 00:34:00.901 13:14:07 
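The `gather_supported_nvmf_pci_devs` loop above resolves each supported PCI address to its kernel interface by globbing `/sys/bus/pci/devices/<pci>/net/*` and keeping only the basename (the `${pci_net_devs[@]##*/}` step). A sketch of that lookup against a mock sysfs root, using the PCI addresses and `cvl_*` names seen in the log (real use would glob the live sysfs tree instead):

```shell
#!/bin/sh
# Sketch of the PCI-address -> net-interface resolution from the log:
# each PCI device exposes its interface as a subdirectory of .../net/,
# so the interface name is the basename of the glob match.
set -eu

sysfs=$(mktemp -d)   # mock stand-in for /sys/bus/pci/devices
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=""
for pci in 0000:af:00.0 0000:af:00.1; do
    for dev in "$sysfs/$pci/net/"*; do
        name=${dev##*/}   # strip the sysfs path prefix, keep the ifname
        echo "Found net devices under $pci: $name"
        net_devs="$net_devs $name"
    done
done
```

This is why the script can later pick `cvl_0_0` as the target interface and `cvl_0_1` as the initiator interface: the sysfs layout ties each interface to exactly one PCI function.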
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:00.901 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:00.902 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:00.902 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:00.902 13:14:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:00.902 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:00.902 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.902 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.902 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.902 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:00.902 13:14:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:00.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:00.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:34:00.902 00:34:00.902 --- 10.0.0.2 ping statistics --- 00:34:00.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.902 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:00.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:34:00.902 00:34:00.902 --- 10.0.0.1 ping statistics --- 00:34:00.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.902 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1179952 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1179952 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1179952 ']' 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:00.902 13:14:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d7933f5a16961a4959cd678b435d5ae5 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6v7 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d7933f5a16961a4959cd678b435d5ae5 0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d7933f5a16961a4959cd678b435d5ae5 0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d7933f5a16961a4959cd678b435d5ae5 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6v7 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6v7 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.6v7 
00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f1b1094abbad2aa34cb357259a1f348a96dbabd60d48525e3dbf382de97b39c0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.952 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f1b1094abbad2aa34cb357259a1f348a96dbabd60d48525e3dbf382de97b39c0 3 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f1b1094abbad2aa34cb357259a1f348a96dbabd60d48525e3dbf382de97b39c0 3 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f1b1094abbad2aa34cb357259a1f348a96dbabd60d48525e3dbf382de97b39c0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.952 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.952 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.952 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a5714985f1000e09eaf76ce68ba51263b34f0e66e477d5c4 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.MQM 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a5714985f1000e09eaf76ce68ba51263b34f0e66e477d5c4 0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a5714985f1000e09eaf76ce68ba51263b34f0e66e477d5c4 0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a5714985f1000e09eaf76ce68ba51263b34f0e66e477d5c4 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:00.902 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.MQM 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.MQM 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.MQM 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4feb3d4ea2057f632eaf589727b2072ba5b4298cdab97496 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.omv 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4feb3d4ea2057f632eaf589727b2072ba5b4298cdab97496 2 00:34:00.903 13:14:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4feb3d4ea2057f632eaf589727b2072ba5b4298cdab97496 2 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4feb3d4ea2057f632eaf589727b2072ba5b4298cdab97496 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.omv 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.omv 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.omv 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2155fd91e4acc77b5bcaef761bf9afa7 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Tba 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2155fd91e4acc77b5bcaef761bf9afa7 1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2155fd91e4acc77b5bcaef761bf9afa7 1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2155fd91e4acc77b5bcaef761bf9afa7 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Tba 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Tba 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Tba 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c91101db2e734c060d63a3ecafea34b6 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3jD 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c91101db2e734c060d63a3ecafea34b6 1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c91101db2e734c060d63a3ecafea34b6 1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c91101db2e734c060d63a3ecafea34b6 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3jD 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3jD 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.3jD 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:00.903 13:14:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e78be0ac51ab679a8a97e752157838df6c472293889e601e 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Qzg 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e78be0ac51ab679a8a97e752157838df6c472293889e601e 2 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e78be0ac51ab679a8a97e752157838df6c472293889e601e 2 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e78be0ac51ab679a8a97e752157838df6c472293889e601e 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:00.903 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Qzg 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Qzg 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Qzg 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d0a9ab42c60e7ed9e874200659ae5d9b 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iK4 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d0a9ab42c60e7ed9e874200659ae5d9b 0 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d0a9ab42c60e7ed9e874200659ae5d9b 0 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d0a9ab42c60e7ed9e874200659ae5d9b 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iK4 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iK4 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.iK4 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cda3e5bd6e9cd4fbdd41eb02bc04bfbfe7787944276936ff03b1ae3777b2cd55 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.LKa 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cda3e5bd6e9cd4fbdd41eb02bc04bfbfe7787944276936ff03b1ae3777b2cd55 3 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cda3e5bd6e9cd4fbdd41eb02bc04bfbfe7787944276936ff03b1ae3777b2cd55 3 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cda3e5bd6e9cd4fbdd41eb02bc04bfbfe7787944276936ff03b1ae3777b2cd55 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:01.163 13:14:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.LKa 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.LKa 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.LKa 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1179952 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1179952 ']' 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:01.163 13:14:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6v7 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.952 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.952 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.MQM 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.omv ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.omv 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Tba 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.3jD ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3jD 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Qzg 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.iK4 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.iK4 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.LKa 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:01.422 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.423 13:14:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:01.423 13:14:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:03.958 Waiting for block devices as requested 00:34:04.217 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:04.217 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:04.217 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:04.476 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:04.476 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:04.476 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:04.476 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:04.735 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:04.735 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:04.735 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:04.994 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:04.994 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:04.994 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:04.994 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:05.253 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:05.253 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:05.253 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:05.821 No valid GPT data, bailing 00:34:05.821 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:06.081 00:34:06.081 Discovery Log Number of Records 2, Generation counter 2 00:34:06.081 =====Discovery Log Entry 0====== 00:34:06.081 trtype: tcp 00:34:06.081 adrfam: ipv4 00:34:06.081 subtype: current discovery subsystem 00:34:06.081 treq: not specified, sq flow control disable supported 00:34:06.081 portid: 1 00:34:06.081 trsvcid: 4420 00:34:06.081 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:06.081 traddr: 10.0.0.1 00:34:06.081 eflags: none 00:34:06.081 sectype: none 00:34:06.081 =====Discovery Log Entry 1====== 00:34:06.081 trtype: tcp 00:34:06.081 adrfam: ipv4 00:34:06.081 subtype: nvme subsystem 00:34:06.081 treq: not specified, sq flow control disable supported 00:34:06.081 portid: 1 00:34:06.081 trsvcid: 4420 00:34:06.081 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:06.081 traddr: 10.0.0.1 00:34:06.081 eflags: none 00:34:06.081 sectype: none 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:06.081 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.082 13:14:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.341 nvme0n1 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.341 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.342 nvme0n1 00:34:06.342 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.601 13:14:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.601 
13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.601 nvme0n1 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.601 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:06.861 nvme0n1 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.861 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.121 nvme0n1 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.121 13:14:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.121 13:14:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.380 nvme0n1 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.380 
13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:07.380 
13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.380 13:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.380 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.381 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.639 nvme0n1 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.639 13:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.639 13:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.639 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.640 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.640 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.640 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.640 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.640 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.640 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.640 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.640 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.899 nvme0n1 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.899 13:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.899 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.158 nvme0n1 00:34:08.158 13:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:08.158 13:14:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:08.158 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.159 13:14:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.418 nvme0n1 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=:
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.418 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.678 nvme0n1
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ:
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=:
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ:
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]]
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=:
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.678 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.938 nvme0n1
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==:
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==:
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==:
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]]
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==:
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.938 13:14:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.197 nvme0n1
00:34:09.197 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.197 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:09.197 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:09.197 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.197 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.197 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn:
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt:
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn:
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]]
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt:
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:09.456 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:09.457 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:09.457 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:09.457 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:09.457 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:09.457 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:09.457 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.457 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.716 nvme0n1
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==:
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI:
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==:
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]]
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI:
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.716 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.976 nvme0n1
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=:
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=:
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.976 13:14:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.235 nvme0n1
00:34:10.235 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:10.235 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:10.235 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:10.235 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:10.235 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.235 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ:
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=:
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ:
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]]
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=:
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:10.236 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.804 nvme0n1
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==:
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==:
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==:
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]]
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==:
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:10.804 13:14:18
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.804 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.063 nvme0n1 00:34:11.063 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.063 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.063 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.063 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.063 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.063 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.322 13:14:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.322 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.322 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.322 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.322 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:11.322 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.322 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.322 13:14:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.322 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.322 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.322 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.323 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.581 nvme0n1 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.582 13:14:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.582 13:14:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.582 13:14:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.582 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.148 nvme0n1 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.148 13:14:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.148 13:14:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.148 13:14:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.408 nvme0n1 00:34:12.408 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.408 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.408 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.408 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.408 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.408 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.667 13:14:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.667 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.668 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.668 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.668 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.235 nvme0n1 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.235 13:14:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.235 13:14:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.235 13:14:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:13.235 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.235 13:14:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.803 nvme0n1 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.803 13:14:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.803 13:14:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.371 nvme0n1 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.371 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.630 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.261 nvme0n1 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.261 
13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.261 13:14:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.847 nvme0n1 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.847 nvme0n1 00:34:15.847 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.106 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.107 
13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.107 nvme0n1 
00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.107 13:14:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:16.366 13:14:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.366 
13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.366 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.367 nvme0n1 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.367 13:14:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:16.367 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.626 nvme0n1 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.626 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.627 13:14:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.627 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.886 nvme0n1 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.886 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.887 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.887 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.887 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.887 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.887 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.887 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.887 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.146 nvme0n1 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.146 
13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.146 13:14:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.406 nvme0n1 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 
00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.406 13:14:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.406 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.664 nvme0n1 00:34:17.664 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.664 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.664 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.665 13:14:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.665 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.924 nvme0n1 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.924 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.184 nvme0n1 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.184 13:14:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.184 13:14:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:18.184 13:14:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.184 13:14:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.444 nvme0n1 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.444 
13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.444 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.703 nvme0n1 00:34:18.703 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.703 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.703 13:14:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.703 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.703 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.703 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.703 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.703 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.703 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.703 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.962 13:14:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.962 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.963 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.963 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.963 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.963 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.222 nvme0n1 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.222 13:14:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.222 13:14:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.482 nvme0n1 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.482 13:14:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:19.482 13:14:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:19.482 
13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.482 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.741 nvme0n1 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.741 13:14:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.741 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.742 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.310 nvme0n1 
00:34:20.310 13:14:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:20.310 13:14:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.310 
13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.310 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.569 nvme0n1 00:34:20.569 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.569 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.569 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.569 13:14:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.569 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.569 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.829 13:14:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.829 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.088 nvme0n1 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.088 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:21.089 13:14:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.089 13:14:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.089 13:14:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.656 nvme0n1 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.656 13:14:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.656 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.657 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.657 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:21.657 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.915 nvme0n1 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.915 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:22.174 13:14:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.174 13:14:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.741 nvme0n1 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.741 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.742 13:14:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.310 nvme0n1 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.310 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.878 nvme0n1 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.878 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.137 13:14:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.705 nvme0n1 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.705 13:14:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:25.273 nvme0n1 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:25.273 13:14:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.273 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.274 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.532 nvme0n1 00:34:25.532 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.532 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.532 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.532 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.532 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.532 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.532 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.532 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.533 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.792 nvme0n1 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.792 nvme0n1 00:34:25.792 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.051 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.051 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.052 nvme0n1 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.052 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.311 13:14:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:26.311 nvme0n1 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.311 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.571 13:14:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.571 13:14:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.571 nvme0n1 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.571 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:26.831 13:14:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.831 nvme0n1 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.831 
13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:26.831 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.090 13:14:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.090 nvme0n1 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.090 13:14:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.090 13:14:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.353 13:14:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.353 nvme0n1 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.353 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:27.354 13:14:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.354 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.614 nvme0n1 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.614 
13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.614 
13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.614 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.873 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.873 nvme0n1 00:34:27.873 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.873 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.873 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.874 13:14:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.874 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.874 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:28.132 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.133 13:14:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.392 nvme0n1 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:28.392 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.393 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.652 nvme0n1 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.652 13:14:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.652 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.911 nvme0n1 00:34:28.911 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.911 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.911 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.912 13:14:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.912 13:14:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.170 nvme0n1 00:34:29.170 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.170 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.170 
13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.170 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.170 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.170 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.429 13:14:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.429 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.688 nvme0n1 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
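Each iteration traced above follows the same shape: configure the host side to accept exactly one digest/DH-group pair via `bdev_nvme_set_options`, attach with the key under test, verify the controller came up, then detach. A minimal sketch of that loop (the `rpc_cmd` wrapper here just echoes its arguments; in auth.sh it invokes the SPDK RPC client, and the key names are placeholders):

```shell
#!/usr/bin/env bash
# Sketch of the per-key DH-HMAC-CHAP test loop seen in this trace.
# rpc_cmd is a stand-in: it echoes instead of calling scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

digest=sha512
keys=(key0 key1 key2 key3 key4)   # placeholder key ids 0..4

for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in "${!keys[@]}"; do
    # Restrict negotiation to one digest/dhgroup, then connect
    # with the key for this iteration and tear the controller down.
    rpc_cmd bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done
```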
common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:29.688 13:14:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.688 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.255 nvme0n1 00:34:30.255 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.255 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.255 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.255 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.255 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.255 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.255 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:30.255 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:30.256 
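The secrets echoed in this trace all have the form `DHHC-1:<id>:<base64>:`, the representation produced by `nvme gen-dhchap-key`, where the two-digit id names the hash used to transform the secret (00 meaning an unhashed key). A small hedged parser for that field, assuming the standard 00/01/02/03 mapping:

```shell
# Map the hash-id field of a DHHC-1 secret to a digest name.
# Assumed mapping (as used by nvme-cli gen-dhchap-key):
#   00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512.
dhchap_hash() {
  local id=${1#DHHC-1:}   # strip the "DHHC-1:" prefix
  id=${id%%:*}            # keep only the two-digit hash id
  case $id in
    00) echo none ;;
    01) echo sha256 ;;
    02) echo sha384 ;;
    03) echo sha512 ;;
    *)  echo unknown; return 1 ;;
  esac
}
```

So a `DHHC-1:03:...` key like the keyid=4 secret above was derived with SHA-512, while the `DHHC-1:00:...` keys are unhashed.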
13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.256 13:14:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.256 13:14:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.256 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.515 nvme0n1 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.515 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.774 13:14:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.774 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.775 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.775 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.775 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.775 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.775 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:30.775 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.775 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.033 nvme0n1 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.033 13:14:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.033 13:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.601 nvme0n1 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.601 
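The `get_main_ns_ip` trace that repeats before every attach resolves which address variable to use from the transport type, then expands it indirectly. A sketch of that selection logic (variable names taken from the trace; `NVMF_INITIATOR_IP` is assumed to be set by the test environment):

```shell
# Sketch of the get_main_ns_ip selection traced above: choose the
# name of an address variable by transport, then dereference it.
get_main_ns_ip() {
  local transport=$1 ip
  declare -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
  )
  ip=${ip_candidates[$transport]:?unknown transport}
  # Indirect expansion: print the value of the named variable.
  echo "${!ip:?$ip is unset}"
}
```

For the tcp runs in this log that resolves to `NVMF_INITIATOR_IP`, hence the repeated `echo 10.0.0.1`.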
13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ: 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: ]] 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjFiMTA5NGFiYmFkMmFhMzRjYjM1NzI1OWExZjM0OGE5NmRiYWJkNjBkNDg1MjVlM2RiZjM4MmRlOTdiMzljMIobtwo=: 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.601 13:14:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.601 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.169 nvme0n1 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.169 13:14:39 
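The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line in the trace is how the script makes bidirectional authentication optional: the `--dhchap-ctrlr-key` flag is appended only when a controller key exists for that keyid (keyid 4 above has an empty ckey, so its attach omits the flag). A hedged sketch of that conditional-argument idiom, with placeholder key material:

```shell
# Sketch of the conditional ckey handling traced above: expand to
# "--dhchap-ctrlr-key ckeyN" only when ckeys[N] is non-empty.
ckeys=("secret0" "")   # placeholder: keyid 1 has no controller key

build_ckey_args() {
  local keyid=$1
  local args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${args[@]}"
}
```

With `:+`, an unset or empty `ckeys[N]` yields no words at all, so the attach command stays valid either way.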
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:32.169 13:14:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.169 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.170 13:14:39 
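The `DHHC-1:...:` strings echoed throughout this trace follow the NVMe DH-HMAC-CHAP secret representation. As a hedged sketch (my reading of the spec/libnvme conventions, not something this log itself asserts), the colon-separated fields are a format tag, a transform indicator, and a base64 blob carrying the secret plus a 4-byte CRC32. Parsing one of the keys from this run:

```shell
# Hedged sketch: split one DHHC-1 secret from this log and check its
# shape. Field meanings (format tag / transform indicator /
# base64(secret || 4-byte CRC32)) are my reading of the NVMe DH-CHAP
# secret format, not something the trace states.
key='DHHC-1:00:ZDc5MzNmNWExNjk2MWE0OTU5Y2Q2NzhiNDM1ZDVhZTVinRCZ:'
IFS=: read -r tag xform b64 _ <<< "$key"
echo "$tag"      # DHHC-1
echo "$xform"    # 00
# 48 base64 chars decode to exactly 36 bytes, consistent with a
# 32-byte secret followed by a 4-byte CRC32 (on GNU coreutils).
printf '%s' "$b64" | base64 -d | wc -c
```

The longer `:03:` keys in the trace decode the same way, just with a larger secret before the trailing CRC.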
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.170 13:14:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.738 nvme0n1 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.738 13:14:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:32.738 13:14:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.738 13:14:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.674 nvme0n1 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTc4YmUwYWM1MWFiNjc5YThhOTdlNzUyMTU3ODM4ZGY2YzQ3MjI5Mzg4OWU2MDFlFJI2Dw==: 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: ]] 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDBhOWFiNDJjNjBlN2VkOWU4NzQyMDA2NTlhZTVkOWLTU8LI: 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:33.674 13:14:41 
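The `get_main_ns_ip` trace repeated above resolves which address to dial for the current transport: it maps the transport to an environment variable *name*, then dereferences it. A minimal standalone sketch of that selection logic (variable names mirror the `nvmf/common.sh` trace; the `10.0.0.1` value is the one this run echoes):

```shell
# Minimal sketch of the address selection shown by the get_main_ns_ip
# trace: transport -> env var name -> value, failing if anything is
# unset. Simplified from what nvmf/common.sh appears to do.
get_main_ns_ip() {
    local ip name
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    name=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $name ]] && return 1
    ip=${!name}          # indirect expansion: value of the named variable
    [[ -z $ip ]] && return 1
    echo "$ip"
}

TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # prints 10.0.0.1
```

The resolved address is what the subsequent `rpc_cmd bdev_nvme_attach_controller -a 10.0.0.1 -s 4420` calls consume.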
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.674 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.242 nvme0n1 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2RhM2U1YmQ2ZTljZDRmYmRkNDFlYjAyYmMwNGJmYmZlNzc4Nzk0NDI3NjkzNmZmMDNiMWFlMzc3N2IyY2Q1NS00VyQ=: 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.242 
13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.242 13:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.810 nvme0n1 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:34.810 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.811 request: 00:34:34.811 { 00:34:34.811 "name": "nvme0", 00:34:34.811 "trtype": "tcp", 00:34:34.811 "traddr": "10.0.0.1", 00:34:34.811 "adrfam": "ipv4", 00:34:34.811 "trsvcid": "4420", 00:34:34.811 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:34.811 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:34.811 "prchk_reftag": false, 00:34:34.811 "prchk_guard": false, 00:34:34.811 "hdgst": false, 00:34:34.811 "ddgst": false, 00:34:34.811 "allow_unrecognized_csi": false, 00:34:34.811 "method": "bdev_nvme_attach_controller", 00:34:34.811 "req_id": 1 00:34:34.811 } 00:34:34.811 Got JSON-RPC error 
response 00:34:34.811 response: 00:34:34.811 { 00:34:34.811 "code": -5, 00:34:34.811 "message": "Input/output error" 00:34:34.811 } 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.811 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.070 request: 
00:34:35.070 { 00:34:35.070 "name": "nvme0", 00:34:35.070 "trtype": "tcp", 00:34:35.070 "traddr": "10.0.0.1", 00:34:35.070 "adrfam": "ipv4", 00:34:35.070 "trsvcid": "4420", 00:34:35.070 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:35.070 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:35.070 "prchk_reftag": false, 00:34:35.070 "prchk_guard": false, 00:34:35.070 "hdgst": false, 00:34:35.070 "ddgst": false, 00:34:35.070 "dhchap_key": "key2", 00:34:35.070 "allow_unrecognized_csi": false, 00:34:35.070 "method": "bdev_nvme_attach_controller", 00:34:35.070 "req_id": 1 00:34:35.070 } 00:34:35.070 Got JSON-RPC error response 00:34:35.070 response: 00:34:35.070 { 00:34:35.070 "code": -5, 00:34:35.070 "message": "Input/output error" 00:34:35.070 } 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.070 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.071 13:14:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.071 request: 00:34:35.071 { 00:34:35.071 "name": "nvme0", 00:34:35.071 "trtype": "tcp", 00:34:35.071 "traddr": "10.0.0.1", 00:34:35.071 "adrfam": "ipv4", 00:34:35.071 "trsvcid": "4420", 00:34:35.071 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:35.071 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:35.071 "prchk_reftag": false, 00:34:35.071 "prchk_guard": false, 00:34:35.071 "hdgst": false, 00:34:35.071 "ddgst": false, 00:34:35.071 "dhchap_key": "key1", 00:34:35.071 "dhchap_ctrlr_key": "ckey2", 00:34:35.071 "allow_unrecognized_csi": false, 00:34:35.071 "method": "bdev_nvme_attach_controller", 00:34:35.071 "req_id": 1 00:34:35.071 } 00:34:35.071 Got JSON-RPC error response 00:34:35.071 response: 00:34:35.071 { 00:34:35.071 "code": -5, 00:34:35.071 "message": "Input/output error" 00:34:35.071 } 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.071 13:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.330 nvme0n1 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:35.330 13:14:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.330 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.331 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.331 request: 00:34:35.331 { 00:34:35.331 "name": "nvme0", 00:34:35.331 "dhchap_key": "key1", 00:34:35.331 "dhchap_ctrlr_key": "ckey2", 00:34:35.331 "method": "bdev_nvme_set_keys", 00:34:35.331 "req_id": 1 00:34:35.331 } 00:34:35.331 Got JSON-RPC error response 00:34:35.331 
response: 00:34:35.331 { 00:34:35.331 "code": -13, 00:34:35.589 "message": "Permission denied" 00:34:35.589 } 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:35.589 13:14:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:36.526 13:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.526 13:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:36.526 13:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.526 13:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.526 13:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.526 13:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:36.526 13:14:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:37.461 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.461 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:37.461 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.461 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.461 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.720 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTU3MTQ5ODVmMTAwMGUwOWVhZjc2Y2U2OGJhNTEyNjNiMzRmMGU2NmU0NzdkNWM0reiyQQ==: 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: ]] 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGZlYjNkNGVhMjA1N2Y2MzJlYWY1ODk3MjdiMjA3MmJhNWI0Mjk4Y2RhYjk3NDk2ZuTgHA==: 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.721 nvme0n1 00:34:37.721 13:14:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE1NWZkOTFlNGFjYzc3YjViY2FlZjc2MWJmOWFmYTcTjUcn: 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: ]] 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzkxMTAxZGIyZTczNGMwNjBkNjNhM2VjYWZlYTM0YjbbctSt: 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:37.721 13:14:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.721 request: 00:34:37.721 { 00:34:37.721 "name": "nvme0", 00:34:37.721 "dhchap_key": "key2", 00:34:37.721 "dhchap_ctrlr_key": "ckey1", 00:34:37.721 "method": "bdev_nvme_set_keys", 00:34:37.721 "req_id": 1 00:34:37.721 } 00:34:37.721 Got JSON-RPC error response 00:34:37.721 response: 00:34:37.721 { 00:34:37.721 "code": -13, 00:34:37.721 "message": "Permission denied" 00:34:37.721 } 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:37.721 13:14:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.721 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.980 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:37.980 13:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:38.917 rmmod nvme_tcp 
00:34:38.917 rmmod nvme_fabrics 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1179952 ']' 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1179952 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1179952 ']' 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1179952 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179952 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179952' 00:34:38.917 killing process with pid 1179952 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1179952 00:34:38.917 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1179952 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.176 13:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.710 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:41.710 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:41.710 13:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:41.710 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:41.710 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:41.710 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:41.710 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:41.710 13:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:41.710 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:41.710 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:41.710 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:41.710 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:41.710 13:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:44.244 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:44.244 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:45.181 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:45.181 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.6v7 /tmp/spdk.key-null.MQM /tmp/spdk.key-sha256.Tba /tmp/spdk.key-sha384.Qzg 
/tmp/spdk.key-sha512.LKa /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:45.181 13:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:47.715 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:47.715 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:47.715 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:47.715 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:47.715 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:47.715 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:47.715 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:47.974 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:47.974 00:34:47.974 real 0m53.789s 00:34:47.974 user 0m48.774s 00:34:47.974 sys 0m12.431s 00:34:47.974 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.974 13:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.974 ************************************ 00:34:47.974 END TEST nvmf_auth_host 00:34:47.974 ************************************ 00:34:47.974 13:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:34:47.974 13:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:47.974 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:47.974 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.974 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.974 ************************************ 00:34:47.974 START TEST nvmf_digest 00:34:47.974 ************************************ 00:34:47.974 13:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:48.234 * Looking for test storage... 00:34:48.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:48.234 13:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:48.234 13:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:34:48.234 13:14:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:48.234 13:14:56 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.234 --rc genhtml_branch_coverage=1 00:34:48.234 --rc genhtml_function_coverage=1 00:34:48.234 --rc genhtml_legend=1 00:34:48.234 --rc geninfo_all_blocks=1 00:34:48.234 --rc geninfo_unexecuted_blocks=1 00:34:48.234 00:34:48.234 ' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.234 --rc genhtml_branch_coverage=1 00:34:48.234 --rc genhtml_function_coverage=1 00:34:48.234 --rc genhtml_legend=1 00:34:48.234 --rc geninfo_all_blocks=1 00:34:48.234 --rc geninfo_unexecuted_blocks=1 00:34:48.234 00:34:48.234 ' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.234 --rc genhtml_branch_coverage=1 00:34:48.234 --rc genhtml_function_coverage=1 00:34:48.234 --rc genhtml_legend=1 00:34:48.234 --rc geninfo_all_blocks=1 00:34:48.234 --rc geninfo_unexecuted_blocks=1 00:34:48.234 00:34:48.234 ' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.234 --rc genhtml_branch_coverage=1 00:34:48.234 --rc genhtml_function_coverage=1 00:34:48.234 --rc genhtml_legend=1 00:34:48.234 --rc geninfo_all_blocks=1 00:34:48.234 --rc geninfo_unexecuted_blocks=1 00:34:48.234 00:34:48.234 ' 00:34:48.234 13:14:56 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.234 
13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
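The `'[' '' -eq 1 ']'` trace adjacent to this point trips bash's "integer expression expected" warning: an empty string is being handed to a numeric `-eq` comparison. A minimal sketch of the failure mode and a defensive rewrite (`maybe_flag` is an illustrative stand-in, not the actual variable from nvmf/common.sh):

```shell
# Reproduces the "integer expression expected" failure mode and a guard for it.
# `maybe_flag` is an illustrative stand-in, not the nvmf/common.sh variable.
maybe_flag=""

# Unguarded: [ '' -eq 1 ] errors (warning silenced here) and evaluates false.
if [ "$maybe_flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Guarded: default an empty/unset value to 0 before the numeric comparison.
if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

Equivalently, `[[ -n $maybe_flag && $maybe_flag -eq 1 ]]` avoids the warning, since `[[ ]]` arithmetic treats an empty operand as 0 and `-n` short-circuits first.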
00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:48.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:48.234 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:48.235 13:14:56 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:48.235 13:14:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:54.803 13:15:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:54.803 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:54.804 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:54.804 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:54.804 Found net devices under 0000:af:00.0: cvl_0_0 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:54.804 Found net devices under 0000:af:00.1: cvl_0_1 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:54.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:54.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:34:54.804 00:34:54.804 --- 10.0.0.2 ping statistics --- 00:34:54.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.804 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:54.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:54.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:34:54.804 00:34:54.804 --- 10.0.0.1 ping statistics --- 00:34:54.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.804 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:54.804 ************************************ 00:34:54.804 START TEST nvmf_digest_clean 00:34:54.804 ************************************ 00:34:54.804 
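The namespace plumbing traced above (create a netns, move a port into it, assign 10.0.0.1/24 and 10.0.0.2/24, bring links up, ping both directions) can be reproduced without the E810 ports by substituting a veth pair for cvl_0_0/cvl_0_1. A sketch under that assumption; the namespace and interface names here are invented, and it requires root plus iproute2:

```shell
# Rebuilds the traced topology with a veth pair instead of physical NICs.
# Names (spdk_tgt_ns, veth_host, veth_tgt) are illustrative, not from the log.
setup_test_ns() {
    ip netns add spdk_tgt_ns
    ip link add veth_host type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns

    # Mirror of the traced addressing: initiator side .1, target side .2.
    ip addr add 10.0.0.1/24 dev veth_host
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt

    ip link set veth_host up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up

    # Same reachability checks the harness performs, in both directions.
    ping -c 1 10.0.0.2
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1
}

if [[ $EUID -eq 0 ]] && command -v ip >/dev/null; then
    setup_test_ns
else
    echo "skipping: needs root and iproute2"
fi
```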
13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:54.804 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.805 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:54.805 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1193555 00:34:54.805 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1193555 00:34:54.805 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:54.805 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1193555 ']' 00:34:54.805 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.805 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.805 13:15:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:54.805 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.805 13:15:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:54.805 [2024-12-15 13:15:01.972966] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:34:54.805 [2024-12-15 13:15:01.973007] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.805 [2024-12-15 13:15:02.053568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.805 [2024-12-15 13:15:02.074285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:54.805 [2024-12-15 13:15:02.074316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:54.805 [2024-12-15 13:15:02.074323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:54.805 [2024-12-15 13:15:02.074329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:54.805 [2024-12-15 13:15:02.074335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:54.805 [2024-12-15 13:15:02.074855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:54.805 null0 00:34:54.805 [2024-12-15 13:15:02.246472] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:54.805 [2024-12-15 13:15:02.270666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
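The `trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT` registered above is the standard bash cleanup-trap pattern. A minimal sketch (handler name and body are illustrative, not the harness's real cleanup):

```shell
# Minimal version of the cleanup-trap pattern: the handler runs on normal
# exit and on SIGINT/SIGTERM alike.
cleanup() {
    # Reset first so the handler cannot fire twice
    # (once for the signal, once more for the EXIT trap).
    trap - SIGINT SIGTERM EXIT
    echo "cleanup ran"
}
trap cleanup SIGINT SIGTERM EXIT

echo "test body runs"
```

Run as a standalone script this prints "test body runs" followed by "cleanup ran" when the EXIT trap fires.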
00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1193582 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1193582 /var/tmp/bperf.sock 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1193582 ']' 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:54.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
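`waitforlisten` above blocks until the bperf RPC socket at /var/tmp/bperf.sock is usable. The core of that idea is a bounded poll loop; a simplified sketch, where `wait_for_sock` is a name invented here and the real helper additionally probes the RPC server rather than just checking the path:

```shell
# Poll until a UNIX-domain socket path appears, with a bounded retry budget.
# Simplified sketch of the waitforlisten idea; the real helper also issues
# an RPC probe against the socket.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Example: a path that never appears times out after ~0.3s.
wait_for_sock /var/tmp/no_such.sock 3 || echo "not listening"
```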
00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:54.805 [2024-12-15 13:15:02.322349] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:34:54.805 [2024-12-15 13:15:02.322392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193582 ] 00:34:54.805 [2024-12-15 13:15:02.398862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.805 [2024-12-15 13:15:02.421355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:54.805 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:55.064 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.064 13:15:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.323 nvme0n1 00:34:55.323 13:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:55.323 13:15:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:55.582 Running I/O for 2 seconds... 00:34:57.453 26230.00 IOPS, 102.46 MiB/s [2024-12-15T12:15:05.360Z] 25538.00 IOPS, 99.76 MiB/s 00:34:57.453 Latency(us) 00:34:57.453 [2024-12-15T12:15:05.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.453 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:57.453 nvme0n1 : 2.00 25553.78 99.82 0.00 0.00 5004.73 2621.44 11734.06 00:34:57.453 [2024-12-15T12:15:05.360Z] =================================================================================================================== 00:34:57.453 [2024-12-15T12:15:05.360Z] Total : 25553.78 99.82 0.00 0.00 5004.73 2621.44 11734.06 00:34:57.453 { 00:34:57.453 "results": [ 00:34:57.453 { 00:34:57.453 "job": "nvme0n1", 00:34:57.453 "core_mask": "0x2", 00:34:57.453 "workload": "randread", 00:34:57.453 "status": "finished", 00:34:57.453 "queue_depth": 128, 00:34:57.453 "io_size": 4096, 00:34:57.453 "runtime": 2.003774, 00:34:57.453 "iops": 25553.780017107718, 00:34:57.453 "mibps": 99.81945319182702, 00:34:57.453 "io_failed": 0, 00:34:57.453 "io_timeout": 0, 00:34:57.453 "avg_latency_us": 5004.731138713121, 00:34:57.453 "min_latency_us": 2621.44, 00:34:57.453 "max_latency_us": 11734.064761904761 00:34:57.453 } 00:34:57.453 ], 00:34:57.453 "core_count": 1 00:34:57.453 } 00:34:57.453 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:57.453 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:34:57.453 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:57.453 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:57.453 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:57.453 | select(.opcode=="crc32c") 00:34:57.453 | "\(.module_name) \(.executed)"' 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1193582 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1193582 ']' 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1193582 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193582 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193582' 00:34:57.712 killing process with pid 1193582 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1193582 00:34:57.712 Received shutdown signal, test time was about 2.000000 seconds 00:34:57.712 00:34:57.712 Latency(us) 00:34:57.712 [2024-12-15T12:15:05.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.712 [2024-12-15T12:15:05.619Z] =================================================================================================================== 00:34:57.712 [2024-12-15T12:15:05.619Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.712 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1193582 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1194045 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1194045 
/var/tmp/bperf.sock 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194045 ']' 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:57.971 [2024-12-15 13:15:05.714057] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:34:57.971 [2024-12-15 13:15:05.714108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194045 ] 00:34:57.971 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.971 Zero copy mechanism will not be used. 
00:34:57.971 [2024-12-15 13:15:05.786514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.971 [2024-12-15 13:15:05.808504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:57.971 13:15:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:58.229 13:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.230 13:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.797 nvme0n1 00:34:58.797 13:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:58.797 13:15:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:58.797 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:58.797 Zero copy mechanism will not be used. 00:34:58.797 Running I/O for 2 seconds... 
00:35:00.809 6213.00 IOPS, 776.62 MiB/s [2024-12-15T12:15:08.716Z] 6251.50 IOPS, 781.44 MiB/s 00:35:00.809 Latency(us) 00:35:00.809 [2024-12-15T12:15:08.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.809 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:00.809 nvme0n1 : 2.00 6250.64 781.33 0.00 0.00 2557.00 643.66 8488.47 00:35:00.809 [2024-12-15T12:15:08.716Z] =================================================================================================================== 00:35:00.809 [2024-12-15T12:15:08.716Z] Total : 6250.64 781.33 0.00 0.00 2557.00 643.66 8488.47 00:35:00.809 { 00:35:00.809 "results": [ 00:35:00.809 { 00:35:00.809 "job": "nvme0n1", 00:35:00.809 "core_mask": "0x2", 00:35:00.809 "workload": "randread", 00:35:00.809 "status": "finished", 00:35:00.809 "queue_depth": 16, 00:35:00.809 "io_size": 131072, 00:35:00.809 "runtime": 2.003155, 00:35:00.809 "iops": 6250.639616005751, 00:35:00.809 "mibps": 781.3299520007189, 00:35:00.809 "io_failed": 0, 00:35:00.809 "io_timeout": 0, 00:35:00.809 "avg_latency_us": 2556.9959301896624, 00:35:00.809 "min_latency_us": 643.6571428571428, 00:35:00.809 "max_latency_us": 8488.47238095238 00:35:00.809 } 00:35:00.809 ], 00:35:00.809 "core_count": 1 00:35:00.809 } 00:35:00.809 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:00.809 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:00.809 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:00.809 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:00.809 | select(.opcode=="crc32c") 00:35:00.809 | "\(.module_name) \(.executed)"' 00:35:00.809 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:01.068 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1194045 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194045 ']' 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194045 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194045 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194045' 00:35:01.069 killing process with pid 1194045 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194045 00:35:01.069 Received shutdown signal, test time was about 2.000000 seconds 
00:35:01.069 00:35:01.069 Latency(us) 00:35:01.069 [2024-12-15T12:15:08.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.069 [2024-12-15T12:15:08.976Z] =================================================================================================================== 00:35:01.069 [2024-12-15T12:15:08.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:01.069 13:15:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194045 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1195105 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1195105 /var/tmp/bperf.sock 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195105 ']' 00:35:01.328 13:15:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:01.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:01.328 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:01.328 [2024-12-15 13:15:09.098725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:01.328 [2024-12-15 13:15:09.098772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195105 ] 00:35:01.328 [2024-12-15 13:15:09.173240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.328 [2024-12-15 13:15:09.194334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.587 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:01.587 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:01.587 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:01.587 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:01.587 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:01.846 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.846 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:02.105 nvme0n1 00:35:02.105 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:02.105 13:15:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:02.105 Running I/O for 2 seconds... 
00:35:04.416 28154.00 IOPS, 109.98 MiB/s [2024-12-15T12:15:12.323Z] 28387.50 IOPS, 110.89 MiB/s 00:35:04.416 Latency(us) 00:35:04.416 [2024-12-15T12:15:12.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.416 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:04.416 nvme0n1 : 2.00 28417.24 111.00 0.00 0.00 4499.77 1810.04 6959.30 00:35:04.416 [2024-12-15T12:15:12.324Z] =================================================================================================================== 00:35:04.417 [2024-12-15T12:15:12.324Z] Total : 28417.24 111.00 0.00 0.00 4499.77 1810.04 6959.30 00:35:04.417 { 00:35:04.417 "results": [ 00:35:04.417 { 00:35:04.417 "job": "nvme0n1", 00:35:04.417 "core_mask": "0x2", 00:35:04.417 "workload": "randwrite", 00:35:04.417 "status": "finished", 00:35:04.417 "queue_depth": 128, 00:35:04.417 "io_size": 4096, 00:35:04.417 "runtime": 2.002411, 00:35:04.417 "iops": 28417.24301354717, 00:35:04.417 "mibps": 111.00485552166863, 00:35:04.417 "io_failed": 0, 00:35:04.417 "io_timeout": 0, 00:35:04.417 "avg_latency_us": 4499.76594997502, 00:35:04.417 "min_latency_us": 1810.0419047619048, 00:35:04.417 "max_latency_us": 6959.299047619048 00:35:04.417 } 00:35:04.417 ], 00:35:04.417 "core_count": 1 00:35:04.417 } 00:35:04.417 13:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:04.417 13:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:04.417 13:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:04.417 13:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:04.417 | select(.opcode=="crc32c") 00:35:04.417 | "\(.module_name) \(.executed)"' 00:35:04.417 13:15:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1195105 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195105 ']' 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195105 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195105 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195105' 00:35:04.417 killing process with pid 1195105 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195105 00:35:04.417 Received shutdown signal, test time was about 2.000000 seconds 
00:35:04.417 00:35:04.417 Latency(us) 00:35:04.417 [2024-12-15T12:15:12.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.417 [2024-12-15T12:15:12.324Z] =================================================================================================================== 00:35:04.417 [2024-12-15T12:15:12.324Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.417 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195105 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1195573 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1195573 /var/tmp/bperf.sock 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195573 ']' 00:35:04.676 13:15:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:04.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:04.676 [2024-12-15 13:15:12.421272] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:04.676 [2024-12-15 13:15:12.421325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195573 ] 00:35:04.676 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.676 Zero copy mechanism will not be used. 
00:35:04.676 [2024-12-15 13:15:12.495326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.676 [2024-12-15 13:15:12.514644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:04.676 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:04.936 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.936 13:15:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.195 nvme0n1 00:35:05.195 13:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:05.195 13:15:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:05.453 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:05.453 Zero copy mechanism will not be used. 00:35:05.453 Running I/O for 2 seconds... 
00:35:07.327 6179.00 IOPS, 772.38 MiB/s [2024-12-15T12:15:15.234Z] 6241.00 IOPS, 780.12 MiB/s 00:35:07.327 Latency(us) 00:35:07.327 [2024-12-15T12:15:15.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.327 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:07.327 nvme0n1 : 2.00 6238.45 779.81 0.00 0.00 2560.80 1443.35 4868.39 00:35:07.327 [2024-12-15T12:15:15.234Z] =================================================================================================================== 00:35:07.327 [2024-12-15T12:15:15.234Z] Total : 6238.45 779.81 0.00 0.00 2560.80 1443.35 4868.39 00:35:07.327 { 00:35:07.327 "results": [ 00:35:07.327 { 00:35:07.327 "job": "nvme0n1", 00:35:07.327 "core_mask": "0x2", 00:35:07.327 "workload": "randwrite", 00:35:07.327 "status": "finished", 00:35:07.327 "queue_depth": 16, 00:35:07.327 "io_size": 131072, 00:35:07.327 "runtime": 2.003542, 00:35:07.327 "iops": 6238.451702035695, 00:35:07.327 "mibps": 779.8064627544619, 00:35:07.327 "io_failed": 0, 00:35:07.327 "io_timeout": 0, 00:35:07.327 "avg_latency_us": 2560.803114306287, 00:35:07.327 "min_latency_us": 1443.352380952381, 00:35:07.327 "max_latency_us": 4868.388571428572 00:35:07.327 } 00:35:07.327 ], 00:35:07.327 "core_count": 1 00:35:07.327 } 00:35:07.327 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:07.327 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:07.327 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:07.327 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:07.327 | select(.opcode=="crc32c") 00:35:07.327 | "\(.module_name) \(.executed)"' 00:35:07.327 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1195573 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195573 ']' 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195573 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195573 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195573' 00:35:07.586 killing process with pid 1195573 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195573 00:35:07.586 Received shutdown signal, test time was about 2.000000 seconds 
00:35:07.586 00:35:07.586 Latency(us) 00:35:07.586 [2024-12-15T12:15:15.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.586 [2024-12-15T12:15:15.493Z] =================================================================================================================== 00:35:07.586 [2024-12-15T12:15:15.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.586 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195573 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1193555 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1193555 ']' 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1193555 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193555 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193555' 00:35:07.845 killing process with pid 1193555 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1193555 00:35:07.845 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1193555 00:35:08.104 00:35:08.104 
real 0m13.902s 00:35:08.104 user 0m26.727s 00:35:08.104 sys 0m4.446s 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.104 ************************************ 00:35:08.104 END TEST nvmf_digest_clean 00:35:08.104 ************************************ 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:08.104 ************************************ 00:35:08.104 START TEST nvmf_digest_error 00:35:08.104 ************************************ 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1196125 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1196125 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196125 ']' 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.104 13:15:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.104 [2024-12-15 13:15:15.943449] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:08.104 [2024-12-15 13:15:15.943488] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.364 [2024-12-15 13:15:16.022633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.364 [2024-12-15 13:15:16.043676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.364 [2024-12-15 13:15:16.043713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:08.364 [2024-12-15 13:15:16.043721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.364 [2024-12-15 13:15:16.043727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.364 [2024-12-15 13:15:16.043732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:08.364 [2024-12-15 13:15:16.044247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.364 [2024-12-15 13:15:16.128719] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.364 13:15:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.364 null0 00:35:08.364 [2024-12-15 13:15:16.215957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.364 [2024-12-15 13:15:16.240144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196283 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196283 /var/tmp/bperf.sock 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196283 ']' 
00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.364 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.623 [2024-12-15 13:15:16.293197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:08.623 [2024-12-15 13:15:16.293239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196283 ] 00:35:08.623 [2024-12-15 13:15:16.368698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.623 [2024-12-15 13:15:16.390511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.623 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.623 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:08.623 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.623 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.882 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:08.882 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.882 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.882 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.882 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.882 13:15:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.142 nvme0n1 00:35:09.142 13:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:09.142 13:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.142 13:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.142 13:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.142 13:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:09.142 13:15:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.401 Running I/O for 2 seconds... 00:35:09.401 [2024-12-15 13:15:17.141638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.141669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.141680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.154263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.154287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.154296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.165788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.165811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.165819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.174863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.174887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12052 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.174895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.185843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.185864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.185873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.194695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.194717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.194731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.205491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.205514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.205522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.215747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.215769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.215777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.224523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.224543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.224552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.233964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.233985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.233993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.243349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.243370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.243378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.252570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.252590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.252598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.261398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.261418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.261426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.401 [2024-12-15 13:15:17.270843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.401 [2024-12-15 13:15:17.270864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.401 [2024-12-15 13:15:17.270872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.402 [2024-12-15 13:15:17.280673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.402 [2024-12-15 13:15:17.280698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.402 [2024-12-15 13:15:17.280706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.402 [2024-12-15 13:15:17.290045] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.402 [2024-12-15 13:15:17.290066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.402 [2024-12-15 13:15:17.290074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.402 [2024-12-15 13:15:17.299450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.402 [2024-12-15 13:15:17.299471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.402 [2024-12-15 13:15:17.299479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.661 [2024-12-15 13:15:17.307546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.661 [2024-12-15 13:15:17.307568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.661 [2024-12-15 13:15:17.307577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.661 [2024-12-15 13:15:17.319744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.661 [2024-12-15 13:15:17.319765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.661 [2024-12-15 13:15:17.319773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:09.661 [2024-12-15 13:15:17.329027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.661 [2024-12-15 13:15:17.329048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.661 [2024-12-15 13:15:17.329056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.338190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.338210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.338218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.348009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.348030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.348038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.356867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.356888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.356896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.366408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.366429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.366437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.375432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.375452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.375460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.383100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.383121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.383129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.392714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.392734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 
13:15:17.392742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.401975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.401997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.402005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.411057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.411078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.411086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.420131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.420151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.420159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.430841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.430862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9648 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.430870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.440093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.440113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.440125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.449279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.449300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.449308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.458528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.458548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.458557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.467251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.467272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.467280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.476862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.476884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.476892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.487528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.487551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.487559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.498331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.498354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.498362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.507342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 
00:35:09.662 [2024-12-15 13:15:17.507363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.507372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.517859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.517880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.517889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.530158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.530179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.530188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.539255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.539276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.539285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.551869] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.551890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.551899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.662 [2024-12-15 13:15:17.561352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.662 [2024-12-15 13:15:17.561373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.662 [2024-12-15 13:15:17.561382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.571524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.571546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.571555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.581441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.581462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.581470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.594480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.594500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.594509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.606195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.606216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.606225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.618012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.618033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.618045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.626570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.626591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.626599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.638237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.638257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.638265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.649348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.649368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.649376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.660749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.660770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.660778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.669609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.669629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 
13:15:17.669637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.680700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.680720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.680729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.689282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.689303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.689310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.698726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.698747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.698755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.708593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.708618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5538 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.708626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.717479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.717500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.717508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.727682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.727703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.727711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.736798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.736819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.736832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.746046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.746067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.746075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.756244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.756264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.756272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.766027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.766047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.766056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.775807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.775831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.775839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.785590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.785609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.785617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.795673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.795695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.795703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.804331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.804352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.804360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.814250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.814271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.923 [2024-12-15 13:15:17.814279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.923 [2024-12-15 13:15:17.824858] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:09.923 [2024-12-15 13:15:17.824878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.924 [2024-12-15 13:15:17.824886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.183 [2024-12-15 13:15:17.833482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.183 [2024-12-15 13:15:17.833504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.183 [2024-12-15 13:15:17.833512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.183 [2024-12-15 13:15:17.844059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.183 [2024-12-15 13:15:17.844078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.183 [2024-12-15 13:15:17.844087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.183 [2024-12-15 13:15:17.856051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.183 [2024-12-15 13:15:17.856070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.183 [2024-12-15 13:15:17.856078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:10.183 [2024-12-15 13:15:17.864370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.183 [2024-12-15 13:15:17.864389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.183 [2024-12-15 13:15:17.864396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.183 [2024-12-15 13:15:17.875475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.183 [2024-12-15 13:15:17.875495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.183 [2024-12-15 13:15:17.875507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.183 [2024-12-15 13:15:17.884678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.183 [2024-12-15 13:15:17.884698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.183 [2024-12-15 13:15:17.884707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.183 [2024-12-15 13:15:17.894728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.183 [2024-12-15 13:15:17.894749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.894757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:17.906165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:17.906184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.906192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:17.918809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:17.918836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.918846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:17.926746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:17.926767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.926776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:17.938222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:17.938243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.938251] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:17.950472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:17.950494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.950502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:17.961567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:17.961587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.961595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:17.969951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:17.969975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.969983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:17.981436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:17.981456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20109 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.981464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:17.992821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:17.992847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:17.992856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:18.002994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:18.003015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:18.003023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:18.011416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:18.011436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:18.011444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:18.023184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:18.023204] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:18.023212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:18.031734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:18.031755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:18.031763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:18.044542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:18.044563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:18.044570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:18.052862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:18.052883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:18.052895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:18.064819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 
13:15:18.064844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:18.064853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:18.074592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:18.074611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:18.074620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.184 [2024-12-15 13:15:18.083622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.184 [2024-12-15 13:15:18.083642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.184 [2024-12-15 13:15:18.083650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.444 [2024-12-15 13:15:18.096443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.444 [2024-12-15 13:15:18.096464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.444 [2024-12-15 13:15:18.096473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.444 [2024-12-15 13:15:18.104476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18ac6e0) 00:35:10.444 [2024-12-15 13:15:18.104496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.444 [2024-12-15 13:15:18.104504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.444 [2024-12-15 13:15:18.116678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.444 [2024-12-15 13:15:18.116698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.444 [2024-12-15 13:15:18.116706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.444 [2024-12-15 13:15:18.129324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.444 [2024-12-15 13:15:18.129344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.444 [2024-12-15 13:15:18.129352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.444 25306.00 IOPS, 98.85 MiB/s [2024-12-15T12:15:18.351Z] [2024-12-15 13:15:18.140426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:10.444 [2024-12-15 13:15:18.140447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.444 [2024-12-15 13:15:18.140456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:10.444 [2024-12-15 13:15:18.148760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.444 [2024-12-15 13:15:18.148784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.444 [2024-12-15 13:15:18.148792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.444 [2024-12-15 13:15:18.160486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.444 [2024-12-15 13:15:18.160507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.444 [2024-12-15 13:15:18.160516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.444 [2024-12-15 13:15:18.172314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.444 [2024-12-15 13:15:18.172335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.444 [2024-12-15 13:15:18.172344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.444 [2024-12-15 13:15:18.183132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.444 [2024-12-15 13:15:18.183154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.444 [2024-12-15 13:15:18.183162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.444 [2024-12-15 13:15:18.194022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.444 [2024-12-15 13:15:18.194042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.444 [2024-12-15 13:15:18.194050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.444 [2024-12-15 13:15:18.203852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.444 [2024-12-15 13:15:18.203874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.444 [2024-12-15 13:15:18.203883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.444 [2024-12-15 13:15:18.213024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.444 [2024-12-15 13:15:18.213044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.444 [2024-12-15 13:15:18.213051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.444 [2024-12-15 13:15:18.223025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.444 [2024-12-15 13:15:18.223046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.444 [2024-12-15 13:15:18.223054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.444 [2024-12-15 13:15:18.234034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.444 [2024-12-15 13:15:18.234054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.444 [2024-12-15 13:15:18.234062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.444 [2024-12-15 13:15:18.246089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.246109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.246116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.254227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.254247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.254255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.264990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.265011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.265019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.276681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.276702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.276710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.287623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.287644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.287651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.296090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.296110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.296118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.306297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.306318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.306326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.314933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.314954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.314962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.325947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.325967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.325979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.334347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.334367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.334375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.445 [2024-12-15 13:15:18.344334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.445 [2024-12-15 13:15:18.344354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.445 [2024-12-15 13:15:18.344362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.354585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.354605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.354614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.362850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.362871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.362879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.371964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.371983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.371992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.381118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.381139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.381147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.392091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.392112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.392120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.401369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.401390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.401403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.412891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.412912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.412920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.424377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.424397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.424405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.434749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.434769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.434777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.443855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.443874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.443883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.455660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.455680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.455688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.463222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.463241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.463250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.474516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.474537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.474545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.486811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.486838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.486846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.496649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.496670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.496682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.506588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.506609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.506618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.515566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.515588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.515596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.525419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.525441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.525449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.534626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.534647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.534656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.544725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.544745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.544753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.554465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.554486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.554494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.562448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.562468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.562476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.574391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.574410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.574419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.582768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.582793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.582802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.705 [2024-12-15 13:15:18.594967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.705 [2024-12-15 13:15:18.594989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.705 [2024-12-15 13:15:18.594997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.706 [2024-12-15 13:15:18.606214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.706 [2024-12-15 13:15:18.606236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.706 [2024-12-15 13:15:18.606245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.965 [2024-12-15 13:15:18.616080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.965 [2024-12-15 13:15:18.616102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.965 [2024-12-15 13:15:18.616111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.965 [2024-12-15 13:15:18.625487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.965 [2024-12-15 13:15:18.625508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.965 [2024-12-15 13:15:18.625516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.965 [2024-12-15 13:15:18.634363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.634388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.634398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.644890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.644910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.644919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.654461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.654482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.654490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.665436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.665458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.665466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.674541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.674563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.674572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.683672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.683693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.683702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.692630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.692652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.692661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.700844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.700866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.700874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.712726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.712747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.712756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.724693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.724715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.724724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.733273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.733294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.733302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.744225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.744245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.744253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.754684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.754705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.754717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.763167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.763189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.763198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.773673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.773694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.773701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.781558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.781579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.781587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.792306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.792326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.792334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.802038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.802060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.802067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.810474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.810495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.810504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.819717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.819738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.819746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.829299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.829321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.829329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.839037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.839058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.839066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.847553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.847574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.847582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.857519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.857540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.857548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:10.966 [2024-12-15 13:15:18.869670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:10.966 [2024-12-15 13:15:18.869691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.966 [2024-12-15 13:15:18.869699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.226 [2024-12-15 13:15:18.880783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:11.226 [2024-12-15 13:15:18.880805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.226 [2024-12-15 13:15:18.880813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.226 [2024-12-15 13:15:18.889505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:11.226 [2024-12-15 13:15:18.889526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.226 [2024-12-15 13:15:18.889535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.226 [2024-12-15 13:15:18.901512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:11.226 [2024-12-15 13:15:18.901534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.226 [2024-12-15 13:15:18.901543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.226 [2024-12-15 13:15:18.910933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:11.226 [2024-12-15 13:15:18.910955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.226 [2024-12-15 13:15:18.910964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.226 [2024-12-15 13:15:18.919268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:11.226 [2024-12-15 13:15:18.919289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.226 [2024-12-15 13:15:18.919301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.226 [2024-12-15 13:15:18.929168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:11.226 [2024-12-15 13:15:18.929190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.226 [2024-12-15 13:15:18.929198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.226 [2024-12-15 13:15:18.939152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:11.226 [2024-12-15 13:15:18.939174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.226 [2024-12-15 13:15:18.939182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.226 [2024-12-15 13:15:18.947948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0)
00:35:11.227 [2024-12-15 13:15:18.947969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:11.227 [2024-12-15 13:15:18.947978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:11.227 [2024-12-15 13:15:18.957179] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:18.957200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:18.957208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:18.967065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:18.967087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:18.967095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:18.975556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:18.975576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:18.975585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:18.985975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:18.985996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:18.986004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:18.996802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:18.996822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:18.996838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.007940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.007965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.007973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.020104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.020126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.020134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.028977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.028999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.029007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.036617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.036637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.036645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.047725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.047746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.047754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.059372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.059393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.059401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.067874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.067894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 
13:15:19.067903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.080431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.080451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.080459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.092750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.092771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.092779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.104744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.104766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.104774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.112935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.112955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21191 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.112963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.227 [2024-12-15 13:15:19.124303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.227 [2024-12-15 13:15:19.124324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.227 [2024-12-15 13:15:19.124332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.486 25333.00 IOPS, 98.96 MiB/s [2024-12-15T12:15:19.393Z] [2024-12-15 13:15:19.136170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ac6e0) 00:35:11.486 [2024-12-15 13:15:19.136191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.486 [2024-12-15 13:15:19.136199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.486 00:35:11.486 Latency(us) 00:35:11.486 [2024-12-15T12:15:19.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.486 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:11.486 nvme0n1 : 2.01 25329.21 98.94 0.00 0.00 5048.57 2418.59 16477.62 00:35:11.486 [2024-12-15T12:15:19.393Z] =================================================================================================================== 00:35:11.486 [2024-12-15T12:15:19.393Z] Total : 25329.21 98.94 0.00 0.00 5048.57 2418.59 16477.62 00:35:11.486 { 00:35:11.486 "results": [ 00:35:11.486 { 00:35:11.486 "job": "nvme0n1", 00:35:11.486 "core_mask": "0x2", 
00:35:11.486 "workload": "randread", 00:35:11.486 "status": "finished", 00:35:11.486 "queue_depth": 128, 00:35:11.486 "io_size": 4096, 00:35:11.486 "runtime": 2.005353, 00:35:11.486 "iops": 25329.20637912627, 00:35:11.486 "mibps": 98.94221241846199, 00:35:11.486 "io_failed": 0, 00:35:11.486 "io_timeout": 0, 00:35:11.486 "avg_latency_us": 5048.5728814614395, 00:35:11.486 "min_latency_us": 2418.5904761904762, 00:35:11.486 "max_latency_us": 16477.62285714286 00:35:11.486 } 00:35:11.486 ], 00:35:11.486 "core_count": 1 00:35:11.486 } 00:35:11.486 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:11.487 | .driver_specific 00:35:11.487 | .nvme_error 00:35:11.487 | .status_code 00:35:11.487 | .command_transient_transport_error' 00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 )) 00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196283 00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196283 ']' 00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196283 00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:35:11.487 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196283 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196283' 00:35:11.746 killing process with pid 1196283 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196283 00:35:11.746 Received shutdown signal, test time was about 2.000000 seconds 00:35:11.746 00:35:11.746 Latency(us) 00:35:11.746 [2024-12-15T12:15:19.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.746 [2024-12-15T12:15:19.653Z] =================================================================================================================== 00:35:11.746 [2024-12-15T12:15:19.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196283 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196754 00:35:11.746 
13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196754 /var/tmp/bperf.sock 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196754 ']' 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.746 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:11.746 [2024-12-15 13:15:19.612970] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:11.746 [2024-12-15 13:15:19.613022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196754 ] 00:35:11.746 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:11.746 Zero copy mechanism will not be used. 
00:35:12.005 [2024-12-15 13:15:19.689083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.005 [2024-12-15 13:15:19.711353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.005 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.005 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:12.005 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:12.005 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:12.263 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:12.263 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.263 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.263 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.263 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.263 13:15:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.522 nvme0n1 00:35:12.522 13:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:12.522 13:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.522 13:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.522 13:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.522 13:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:12.522 13:15:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.783 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:12.783 Zero copy mechanism will not be used. 00:35:12.783 Running I/O for 2 seconds... 00:35:12.783 [2024-12-15 13:15:20.453467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.453503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.453514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.459181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.459208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.459217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.783 
[2024-12-15 13:15:20.464952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.464976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.464985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.470256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.470283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.470291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.475403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.475425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.475434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.480515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.480537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.480545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.485695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.485716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.485725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.490925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.490947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.490955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.496020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.496042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.496050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.501160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.501182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.501190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.506391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.506418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.506426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.511970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.511992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.512001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.517353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.517375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.517384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.522727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.522750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.522758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.527954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.527977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.527985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.533218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.533240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.533248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.538323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.538345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.538353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.543719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.543742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.543750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.548896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.548917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.548925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.554088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.554110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.554118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.559245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.559266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.559277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.783 [2024-12-15 13:15:20.564459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.783 [2024-12-15 13:15:20.564480] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.783 [2024-12-15 13:15:20.564488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.569672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.569694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.569702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.574877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.574900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.574908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.580024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.580046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.580055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.585166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.585189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.585197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.590335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.590357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.590365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.595497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.595519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.595527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.600654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.600675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.600683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.605726] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.605751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.605759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.610876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.610898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.610906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.616004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.616026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.616034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.621173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.621197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.621205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.626392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.626414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.626422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.631622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.631644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.631652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.636804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.636830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.636838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.642014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.642036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.642044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.647210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.647233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.647242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.652356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.652377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.652386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.657508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.657529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.657537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.662655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.662677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.662686] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.667816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.667843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.667852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.673071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.673093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.673102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.678236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.678259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.678268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.784 [2024-12-15 13:15:20.683521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:12.784 [2024-12-15 13:15:20.683543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:12.784 [2024-12-15 13:15:20.683551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.688633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.688656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.688665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.693818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.693848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.693860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.698939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.698960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.698968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.704074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.704096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.704104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.709280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.709301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.709311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.714497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.714519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.714527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.719682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.719705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.719713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.725772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.725795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.725804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.733075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.733099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.733109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.740612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.740636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.045 [2024-12-15 13:15:20.740645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.045 [2024-12-15 13:15:20.748090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.045 [2024-12-15 13:15:20.748115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.748124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.755442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.755465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.755473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.763284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.763307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.763316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.770717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.770740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.770749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.778573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.778596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.778605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.786226] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.786249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.786258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.794045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.794068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.794076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.801880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.801902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.801911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.809450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.809473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.809485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.817175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.817198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.817207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.824729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.824752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.824761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.832407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.832430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.832438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.839503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.839526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.839535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.845475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.845501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.845510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.850817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.850846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.850855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.856255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.856277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.856286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.861576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.861598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.861607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.867045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.867071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.867079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.872409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.872430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.872439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.877792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.877815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.877823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.883245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.883267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.883275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.888570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.888592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.888599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.893895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.893917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.893925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.899261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.899281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.899289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.904539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.904561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.904569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.909926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.909948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.909956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.915304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.046 [2024-12-15 13:15:20.915326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.046 [2024-12-15 13:15:20.915334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.046 [2024-12-15 13:15:20.920551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.047 [2024-12-15 13:15:20.920572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.047 [2024-12-15 13:15:20.920580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.047 [2024-12-15 13:15:20.925741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.047 [2024-12-15 13:15:20.925763] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.047 [2024-12-15 13:15:20.925771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.047 [2024-12-15 13:15:20.931211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.047 [2024-12-15 13:15:20.931233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.047 [2024-12-15 13:15:20.931242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.047 [2024-12-15 13:15:20.936998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.047 [2024-12-15 13:15:20.937025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.047 [2024-12-15 13:15:20.937033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.047 [2024-12-15 13:15:20.942469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.047 [2024-12-15 13:15:20.942491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.047 [2024-12-15 13:15:20.942499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.047 [2024-12-15 13:15:20.947801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2547130) 00:35:13.047 [2024-12-15 13:15:20.947822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.047 [2024-12-15 13:15:20.947837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.307 [2024-12-15 13:15:20.953250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.307 [2024-12-15 13:15:20.953272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.307 [2024-12-15 13:15:20.953281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.307 [2024-12-15 13:15:20.958563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.307 [2024-12-15 13:15:20.958585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.307 [2024-12-15 13:15:20.958596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.307 [2024-12-15 13:15:20.964382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.307 [2024-12-15 13:15:20.964404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.307 [2024-12-15 13:15:20.964413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.307 [2024-12-15 13:15:20.970261] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.307 [2024-12-15 13:15:20.970283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.307 [2024-12-15 13:15:20.970292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.307 [2024-12-15 13:15:20.975557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.307 [2024-12-15 13:15:20.975579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.307 [2024-12-15 13:15:20.975588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.307 [2024-12-15 13:15:20.979513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.307 [2024-12-15 13:15:20.979535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.307 [2024-12-15 13:15:20.979543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.307 [2024-12-15 13:15:20.987008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.307 [2024-12-15 13:15:20.987030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.307 [2024-12-15 13:15:20.987038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:13.307 [2024-12-15 13:15:20.993435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:20.993458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:20.993466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.000280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.000302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.000311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.005988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.006010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.006018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.011704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.011728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.011736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.017005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.017025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.017033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.022175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.022196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.022204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.027277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.027298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.027306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.032359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.032381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 
13:15:21.032388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.037520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.037542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.037550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.042744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.042765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.042773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.048118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.048139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.048147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.053542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.053563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.053571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.058935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.058956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.058964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.064314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.064335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.064343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.069515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.069536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.069544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.074858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.074878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.074887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.080147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.080169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.080177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.085395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.085418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.085427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.090706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.090728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.090736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.095640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.095662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.095670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.100676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.100701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.100709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.106032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.106053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.106061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.111341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.111363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.111371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.116744] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.116765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.116773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.122115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.122137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.122144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.127532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.127553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.127561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.132752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.132773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-12-15 13:15:21.132781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:13.308 [2024-12-15 13:15:21.138122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.308 [2024-12-15 13:15:21.138144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.138152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.143460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.143481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.143490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.148830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.148851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.148860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.154360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.154380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.154389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.159932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.159952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.159961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.165252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.165273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.165281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.170660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.170682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.170690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.176091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.176112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 
13:15:21.176120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.181443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.181464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.181471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.186848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.186869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.186877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.192485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.192505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.192517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.199241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.199263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.199272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.309 [2024-12-15 13:15:21.206497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.309 [2024-12-15 13:15:21.206519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-12-15 13:15:21.206528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.569 [2024-12-15 13:15:21.213277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.569 [2024-12-15 13:15:21.213301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.569 [2024-12-15 13:15:21.213310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.569 [2024-12-15 13:15:21.219831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.569 [2024-12-15 13:15:21.219854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.569 [2024-12-15 13:15:21.219863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.569 [2024-12-15 13:15:21.226012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.569 [2024-12-15 13:15:21.226033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.569 [2024-12-15 13:15:21.226042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.569 [2024-12-15 13:15:21.232429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.569 [2024-12-15 13:15:21.232451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.569 [2024-12-15 13:15:21.232459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.569 [2024-12-15 13:15:21.240133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.569 [2024-12-15 13:15:21.240155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.569 [2024-12-15 13:15:21.240163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.569 [2024-12-15 13:15:21.247055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.569 [2024-12-15 13:15:21.247077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.569 [2024-12-15 13:15:21.247085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.569 [2024-12-15 13:15:21.253437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2547130) 00:35:13.569 [2024-12-15 13:15:21.253462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.569 [2024-12-15 13:15:21.253470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.569 [2024-12-15 13:15:21.259349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.259371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.259379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.265030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.265052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.265061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.269717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.269738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.269746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.275117] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.275138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.275146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.280570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.280592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.280600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.286100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.286122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.286130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.291273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.291295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.291302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.296547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.296569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.296578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.301863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.301884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.301892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.306996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.307017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.307025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.312146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.312168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.312176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.317539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.317560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.317568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.322866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.322887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.322894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.328147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.328168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.328176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.333637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.333659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.333667] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.339351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.339373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.339381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.344903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.344929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.344938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.350311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.350332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.350340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.355646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.355667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.355675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.361022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.361044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.361052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.366266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.366287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.366296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.371623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.371645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.371653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.377416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.377438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.377446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.382776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.382798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.382806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.388388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.388411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.388419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.393893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.393914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.393922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.399629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 
13:15:21.399652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.399660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.570 [2024-12-15 13:15:21.404986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.570 [2024-12-15 13:15:21.405009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.570 [2024-12-15 13:15:21.405017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.410260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.410282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.410290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.415265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.415286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.415294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.420566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.420588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.420595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.425835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.425856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.425865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.431445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.431467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.431475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.436797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.436819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.436835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.442120] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.442143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.442151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.449114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.449137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.449145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.571 5511.00 IOPS, 688.88 MiB/s [2024-12-15T12:15:21.478Z] [2024-12-15 13:15:21.456255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.456279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.456287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.462206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.462228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.462237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.467267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.467290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.467298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.571 [2024-12-15 13:15:21.472901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.571 [2024-12-15 13:15:21.472925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.571 [2024-12-15 13:15:21.472934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.478694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.478718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.478727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.484120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.484142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.484151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.489930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.489956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.489964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.495564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.495591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.495599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.501096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.501118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.501126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.506206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.506228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:13.832 [2024-12-15 13:15:21.506236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.509190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.509211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.509219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.514383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.514404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.514412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.519496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.519517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.519525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.524616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.524637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.524645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.529803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.529830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.529838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.534956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.534977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.534985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.540106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.540127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.540134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.545192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.545212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.545220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.550144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.550166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.550174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.555180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.555201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.555209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.560304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.560324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.560332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.565425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.565445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.565453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.571218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.571239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.571247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.575629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.575654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.575662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.580666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.580687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.580695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.585712] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.585733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.585741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.590817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.590848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.590856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.832 [2024-12-15 13:15:21.595927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.832 [2024-12-15 13:15:21.595949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.832 [2024-12-15 13:15:21.595957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.600955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.600977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.600985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.606053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.606074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.606082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.611148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.611169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.611177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.616223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.616244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.616252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.621486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.621506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.621514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.626716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.626736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.626745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.632396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.632417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.632426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.637768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.637788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.637796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.643087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.643108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 
13:15:21.643116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.648348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.648368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.648376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.653632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.653652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.653660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.658426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.658448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.658456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.663505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.663527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8160 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.663539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.668579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.668601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.668609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.673806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.673833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.673842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.678887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.678908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.678916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.684030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.684052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.684060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.689137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.689160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.689168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.694336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.694358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.694365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.699634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.699656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.699664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.704841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.704862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.704870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.710008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.710034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.710042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.715337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.715361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.715369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.720516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.720539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.720547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.725926] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.725949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.833 [2024-12-15 13:15:21.725957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:13.833 [2024-12-15 13:15:21.732171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:13.833 [2024-12-15 13:15:21.732194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.834 [2024-12-15 13:15:21.732203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.739704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.739729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.739738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.746344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.746367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.746375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.752800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.752823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.752837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.758880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.758902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.758910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.765302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.765324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.765333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.772892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.772915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.772923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.779158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.779179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.779188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.785199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.785222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.785230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.791391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.791415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.791424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.797136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.797160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 
13:15:21.797168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.802495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.802517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.802526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.807838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.807859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.807867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.813139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.813165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.813173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.819345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.819367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.819376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.825616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.825638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.825646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.832164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.832186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.832195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.838695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.838719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.838728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.844936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.844959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.844967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.851475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.851498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.851507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.857596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.857619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.857628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.863745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.863768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.863776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.870150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.870171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.870180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.876058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.876080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.876088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.883202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.883225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.883237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.890663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.094 [2024-12-15 13:15:21.890686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.094 [2024-12-15 13:15:21.890694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.094 [2024-12-15 13:15:21.898488] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.898511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.898520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.905343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.905365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.905375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.911677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.911699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.911708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.918038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.918060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.918069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.925719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.925742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.925755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.933386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.933408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.933417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.939336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.939358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.939367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.946422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.946444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.946452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.952620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.952642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.952651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.957735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.957756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.957765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.962991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.963013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.963021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.968237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.968257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 
13:15:21.968265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.973395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.973416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.973425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.978777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.978802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.978811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.984031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.984053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.984061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.989200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.989221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.989229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.095 [2024-12-15 13:15:21.994362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.095 [2024-12-15 13:15:21.994384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.095 [2024-12-15 13:15:21.994392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.355 [2024-12-15 13:15:21.999550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.355 [2024-12-15 13:15:21.999573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.355 [2024-12-15 13:15:21.999581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.355 [2024-12-15 13:15:22.004762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.355 [2024-12-15 13:15:22.004783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.355 [2024-12-15 13:15:22.004791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.355 [2024-12-15 13:15:22.009949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.355 [2024-12-15 13:15:22.009972] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.355 [2024-12-15 13:15:22.009980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.015135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.015156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.015164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.020190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.020212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.020219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.025282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.025303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.025311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.030436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 
13:15:22.030459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.030468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.035617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.035639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.035648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.040830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.040850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.040858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.045962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.045983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.045991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.051145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.051166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.051174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.056271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.056292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.056300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.061452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.061473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.061481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.066593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.066614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.066626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.071796] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.071817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.071831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.077047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.077067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.077075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.082199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.082220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.082228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.087279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.087300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.087308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.092412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.092433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.092441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.097513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.097534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.097542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.102631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.102652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.102660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.107773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.107795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.107803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.112888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.112909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.112917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.118021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.118042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.118050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.123180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.123201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.123209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.128245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.128266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 
13:15:22.128274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.133434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.133456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.133464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.138579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.138601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.138609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.143687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.143708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.143716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.148724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.356 [2024-12-15 13:15:22.148745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.356 [2024-12-15 13:15:22.148753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.356 [2024-12-15 13:15:22.153853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.153874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.153885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.158950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.158971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.158979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.164098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.164119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.164127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.169080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.169101] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.169110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.174156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.174178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.174186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.179301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.179322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.179331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.184383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.184404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.184412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.189485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 
13:15:22.189506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.189514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.194667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.194688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.194696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.199838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.199862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.199870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.204931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.204952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.204960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.210238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.210260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.210268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.215852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.215873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.215881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.221103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.221125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.221133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.226628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.226649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.226657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.232112] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.232134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.232142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.237725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.237747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.237755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.242873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.242894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.242902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.247965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.247987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.247994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.253166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.253187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.253195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.357 [2024-12-15 13:15:22.258308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.357 [2024-12-15 13:15:22.258329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.357 [2024-12-15 13:15:22.258337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.263495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.263517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.263525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.268755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.268777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.268785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.273929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.273951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.273959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.278984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.279005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.279013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.284159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.284181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.284189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.289252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.289274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.289286] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.294400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.294420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.294428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.299516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.299537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.299545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.304635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.304656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.304665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.309710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.309731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.309740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.314796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.314818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.314833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.319938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.319960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.319968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.325087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.325109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.325117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.330208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.330230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.330238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.335358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.335379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.335387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.340455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.340476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.340485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.345574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.345595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.345603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.350762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 
13:15:22.350782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.350790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.355939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.355961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.355969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.361085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.617 [2024-12-15 13:15:22.361107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.617 [2024-12-15 13:15:22.361114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.617 [2024-12-15 13:15:22.366198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.366219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.366228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.371367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.371389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.371397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.376492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.376513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.376525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.381674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.381696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.381704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.386868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.386890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.386898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.391858] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.391879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.391887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.394592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.394613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.394620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.399838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.399859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.399867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.404904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.404925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.404933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.409723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.409745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.409753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.414966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.414988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.414996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.420147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.420172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.420180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.425279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.425302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.425310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.430088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.430109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.430117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.435194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.435215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.435223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.440249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.440271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.440279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.445945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.445968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.445976] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.618 [2024-12-15 13:15:22.451346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.451368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.451376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.618 5624.00 IOPS, 703.00 MiB/s [2024-12-15T12:15:22.525Z] [2024-12-15 13:15:22.457832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2547130) 00:35:14.618 [2024-12-15 13:15:22.457854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.618 [2024-12-15 13:15:22.457862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:14.618 00:35:14.618 Latency(us) 00:35:14.618 [2024-12-15T12:15:22.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.618 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:14.618 nvme0n1 : 2.00 5624.45 703.06 0.00 0.00 2841.62 729.48 8176.40 00:35:14.618 [2024-12-15T12:15:22.525Z] =================================================================================================================== 00:35:14.618 [2024-12-15T12:15:22.525Z] Total : 5624.45 703.06 0.00 0.00 2841.62 729.48 8176.40 00:35:14.618 { 00:35:14.618 "results": [ 00:35:14.618 { 00:35:14.618 "job": "nvme0n1", 00:35:14.618 "core_mask": "0x2", 00:35:14.618 "workload": "randread", 00:35:14.618 "status": "finished", 00:35:14.618 
"queue_depth": 16, 00:35:14.618 "io_size": 131072, 00:35:14.618 "runtime": 2.002685, 00:35:14.618 "iops": 5624.449176979904, 00:35:14.618 "mibps": 703.056147122488, 00:35:14.618 "io_failed": 0, 00:35:14.618 "io_timeout": 0, 00:35:14.618 "avg_latency_us": 2841.62025974026, 00:35:14.618 "min_latency_us": 729.4780952380952, 00:35:14.618 "max_latency_us": 8176.396190476191 00:35:14.618 } 00:35:14.618 ], 00:35:14.618 "core_count": 1 00:35:14.618 } 00:35:14.618 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:14.618 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:14.618 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:14.618 | .driver_specific 00:35:14.618 | .nvme_error 00:35:14.618 | .status_code 00:35:14.618 | .command_transient_transport_error' 00:35:14.618 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 364 > 0 )) 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196754 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196754 ']' 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196754 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196754 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196754' 00:35:14.877 killing process with pid 1196754 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196754 00:35:14.877 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.877 00:35:14.877 Latency(us) 00:35:14.877 [2024-12-15T12:15:22.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.877 [2024-12-15T12:15:22.784Z] =================================================================================================================== 00:35:14.877 [2024-12-15T12:15:22.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.877 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196754 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197213 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@60 -- # waitforlisten 1197213 /var/tmp/bperf.sock 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197213 ']' 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.135 13:15:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.135 [2024-12-15 13:15:22.931960] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:15.136 [2024-12-15 13:15:22.932008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197213 ] 00:35:15.136 [2024-12-15 13:15:23.006848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.136 [2024-12-15 13:15:23.029259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.395 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.654 nvme0n1 00:35:15.654 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:15.654 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.654 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.913 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.913 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:15.913 13:15:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.913 Running I/O for 2 seconds... 
00:35:15.913 [2024-12-15 13:15:23.666429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef31b8 00:35:15.913 [2024-12-15 13:15:23.667183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.667211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.675906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee8088 00:35:15.913 [2024-12-15 13:15:23.676697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.676719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.686178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef7538 00:35:15.913 [2024-12-15 13:15:23.687380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.687400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.694684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eeaef0 00:35:15.913 [2024-12-15 13:15:23.695588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.695607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.703637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eebfd0 00:35:15.913 [2024-12-15 13:15:23.704518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.704536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.712633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eed0b0 00:35:15.913 [2024-12-15 13:15:23.713532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.713551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.721620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee0630 00:35:15.913 [2024-12-15 13:15:23.722480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.722499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.730585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ede8a8 00:35:15.913 [2024-12-15 13:15:23.731493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.731512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.739845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee1b48 00:35:15.913 [2024-12-15 13:15:23.740506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.740526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.750138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efc998 00:35:15.913 [2024-12-15 13:15:23.751583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.751602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.756433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee3498 00:35:15.913 [2024-12-15 13:15:23.757061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.757080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.768089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efef90 00:35:15.913 [2024-12-15 13:15:23.769514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.769534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.774423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4140 00:35:15.913 [2024-12-15 13:15:23.775040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.775060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.783814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef0ff8 00:35:15.913 [2024-12-15 13:15:23.784561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.784581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.792317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eec408 00:35:15.913 [2024-12-15 13:15:23.793060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 [2024-12-15 13:15:23.793079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.801666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef4f40 00:35:15.913 [2024-12-15 13:15:23.802519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.913 
[2024-12-15 13:15:23.802538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:15.913 [2024-12-15 13:15:23.811622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eeb328 00:35:15.914 [2024-12-15 13:15:23.812585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.914 [2024-12-15 13:15:23.812604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.820732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee3498 00:35:16.174 [2024-12-15 13:15:23.821655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.821679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.829714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee23b8 00:35:16.174 [2024-12-15 13:15:23.830696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.830716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.838677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef0350 00:35:16.174 [2024-12-15 13:15:23.839658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2662 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.839678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.847618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efb048 00:35:16.174 [2024-12-15 13:15:23.848600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.848619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.856548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efc128 00:35:16.174 [2024-12-15 13:15:23.857535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.857555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.865518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee2c28 00:35:16.174 [2024-12-15 13:15:23.866485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.866505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.874447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee7c50 00:35:16.174 [2024-12-15 13:15:23.875496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:124 nsid:1 lba:3378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.875515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.883434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ede8a8 00:35:16.174 [2024-12-15 13:15:23.884435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.884454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.892424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee0630 00:35:16.174 [2024-12-15 13:15:23.893409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.893429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.901336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eed0b0 00:35:16.174 [2024-12-15 13:15:23.902240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.902260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.910339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eef6a8 00:35:16.174 [2024-12-15 13:15:23.911236] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.911256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.918794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef92c0 00:35:16.174 [2024-12-15 13:15:23.919800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.919820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.929061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee5a90 00:35:16.174 [2024-12-15 13:15:23.930178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.930198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.938178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee6b70 00:35:16.174 [2024-12-15 13:15:23.939308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.939327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.947219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with 
pdu=0x200016efe720 00:35:16.174 [2024-12-15 13:15:23.948344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.948364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.956177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef96f8 00:35:16.174 [2024-12-15 13:15:23.957200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.174 [2024-12-15 13:15:23.957218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.174 [2024-12-15 13:15:23.965159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef8618 00:35:16.174 [2024-12-15 13:15:23.966202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:23.966221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:23.974364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef6020 00:35:16.175 [2024-12-15 13:15:23.975561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:23.975580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:23.981695] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efc998 00:35:16.175 [2024-12-15 13:15:23.982428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:23.982447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:23.990908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee0630 00:35:16.175 [2024-12-15 13:15:23.991440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:23.991459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:24.000280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eeaef0 00:35:16.175 [2024-12-15 13:15:24.000943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:24.000962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:24.009506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4578 00:35:16.175 [2024-12-15 13:15:24.010477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:24.010496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 
13:15:24.018458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eeee38 00:35:16.175 [2024-12-15 13:15:24.019411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:24.019431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:24.027391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef8e88 00:35:16.175 [2024-12-15 13:15:24.028345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:24.028364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:24.035525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee3498 00:35:16.175 [2024-12-15 13:15:24.036862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:24.036881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:24.043949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef3a28 00:35:16.175 [2024-12-15 13:15:24.044594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:24.044613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 
sqhd:0036 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:24.053176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1ca0 00:35:16.175 [2024-12-15 13:15:24.053950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:24.053972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:24.064673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef20d8 00:35:16.175 [2024-12-15 13:15:24.066261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:24.066281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.175 [2024-12-15 13:15:24.071274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef7970 00:35:16.175 [2024-12-15 13:15:24.072133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.175 [2024-12-15 13:15:24.072153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.080791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eed0b0 00:35:16.435 [2024-12-15 13:15:24.081726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.081745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.089890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edfdc0 00:35:16.435 [2024-12-15 13:15:24.090445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.090465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.099286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee9e10 00:35:16.435 [2024-12-15 13:15:24.099952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.099971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.107535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef4298 00:35:16.435 [2024-12-15 13:15:24.108225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.108244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.117934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edf988 00:35:16.435 [2024-12-15 13:15:24.119240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.119259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.126984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee99d8 00:35:16.435 [2024-12-15 13:15:24.128290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.128309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.135702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eecc78 00:35:16.435 [2024-12-15 13:15:24.137037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.137056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.144021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4de8 00:35:16.435 [2024-12-15 13:15:24.145008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.145028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.152860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eedd58 00:35:16.435 [2024-12-15 13:15:24.153842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 
[2024-12-15 13:15:24.153861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.161808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef0350 00:35:16.435 [2024-12-15 13:15:24.162790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.162808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.170735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee3498 00:35:16.435 [2024-12-15 13:15:24.171759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.171778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.179911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef5378 00:35:16.435 [2024-12-15 13:15:24.180884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.180902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.188866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee9168 00:35:16.435 [2024-12-15 13:15:24.189858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10924 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.189877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.197767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef92c0 00:35:16.435 [2024-12-15 13:15:24.198750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.198768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.206965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1430 00:35:16.435 [2024-12-15 13:15:24.207730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.207749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.215389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee38d0 00:35:16.435 [2024-12-15 13:15:24.216733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.216751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.223053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee5220 00:35:16.435 [2024-12-15 13:15:24.223771] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.223790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.232400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efeb58 00:35:16.435 [2024-12-15 13:15:24.233240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.233259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.241689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee0a68 00:35:16.435 [2024-12-15 13:15:24.242696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.242715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.251091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4de8 00:35:16.435 [2024-12-15 13:15:24.252172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.252191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.259358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eeff18 00:35:16.435 [2024-12-15 13:15:24.260003] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.260022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.268208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eebfd0 00:35:16.435 [2024-12-15 13:15:24.268840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.268859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.276553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efbcf0 00:35:16.435 [2024-12-15 13:15:24.277183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.435 [2024-12-15 13:15:24.277202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:16.435 [2024-12-15 13:15:24.286553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef6cc8 00:35:16.435 [2024-12-15 13:15:24.287323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.436 [2024-12-15 13:15:24.287346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:16.436 [2024-12-15 13:15:24.297756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee5220 
00:35:16.436 [2024-12-15 13:15:24.299338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.436 [2024-12-15 13:15:24.299357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:16.436 [2024-12-15 13:15:24.304303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efe2e8 00:35:16.436 [2024-12-15 13:15:24.305163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.436 [2024-12-15 13:15:24.305182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:16.436 [2024-12-15 13:15:24.315132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef31b8 00:35:16.436 [2024-12-15 13:15:24.316380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.436 [2024-12-15 13:15:24.316400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.436 [2024-12-15 13:15:24.322484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eea680 00:35:16.436 [2024-12-15 13:15:24.323132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.436 [2024-12-15 13:15:24.323152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:16.436 [2024-12-15 13:15:24.331661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x122ddc0) with pdu=0x200016efdeb0 00:35:16.436 [2024-12-15 13:15:24.332509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.436 [2024-12-15 13:15:24.332528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:16.436 [2024-12-15 13:15:24.340293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee38d0 00:35:16.695 [2024-12-15 13:15:24.341077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.695 [2024-12-15 13:15:24.341097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:16.695 [2024-12-15 13:15:24.349786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef2d80 00:35:16.695 [2024-12-15 13:15:24.350772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.695 [2024-12-15 13:15:24.350793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:16.695 [2024-12-15 13:15:24.359014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efc998 00:35:16.695 [2024-12-15 13:15:24.359989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.695 [2024-12-15 13:15:24.360010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:16.695 [2024-12-15 13:15:24.368356] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef2510 00:35:16.695 [2024-12-15 13:15:24.369353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.695 [2024-12-15 13:15:24.369372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:16.695 [2024-12-15 13:15:24.376915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eea680 00:35:16.695 [2024-12-15 13:15:24.377896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.695 [2024-12-15 13:15:24.377916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:16.695 [2024-12-15 13:15:24.386272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4578 00:35:16.695 [2024-12-15 13:15:24.387380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.695 [2024-12-15 13:15:24.387399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:16.695 [2024-12-15 13:15:24.395195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efcdd0 00:35:16.695 [2024-12-15 13:15:24.395961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.695 [2024-12-15 13:15:24.395981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:35:16.695 [2024-12-15 13:15:24.403378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4de8 00:35:16.696 [2024-12-15 13:15:24.404203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.404222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.412935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eea680 00:35:16.696 [2024-12-15 13:15:24.413907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.413927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.424014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4578 00:35:16.696 [2024-12-15 13:15:24.425480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.425500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.430632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee0630 00:35:16.696 [2024-12-15 13:15:24.431311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.431330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.439986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eef6a8 00:35:16.696 [2024-12-15 13:15:24.440757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.440776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.449355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef9b30 00:35:16.696 [2024-12-15 13:15:24.450340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.450360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.458848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efac10 00:35:16.696 [2024-12-15 13:15:24.459967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.459987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.468192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eddc00 00:35:16.696 [2024-12-15 13:15:24.469407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.469426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.476511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef5be8 00:35:16.696 [2024-12-15 13:15:24.477290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.477310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.485607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef2948 00:35:16.696 [2024-12-15 13:15:24.486544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.486563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.494111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1868 00:35:16.696 [2024-12-15 13:15:24.494883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.494901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.503438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef6890 00:35:16.696 [2024-12-15 13:15:24.504469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.504488] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.512341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef0788 00:35:16.696 [2024-12-15 13:15:24.512974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.512994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.521763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef2d80 00:35:16.696 [2024-12-15 13:15:24.522549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.522572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.530031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1ca0 00:35:16.696 [2024-12-15 13:15:24.530955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.530975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.538904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef35f0 00:35:16.696 [2024-12-15 13:15:24.539667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12628 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:16.696 [2024-12-15 13:15:24.539687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.547933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef7970 00:35:16.696 [2024-12-15 13:15:24.548683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.548702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.556944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edf988 00:35:16.696 [2024-12-15 13:15:24.557684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.557703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.565115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee73e0 00:35:16.696 [2024-12-15 13:15:24.565925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.565944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.576868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efcdd0 00:35:16.696 [2024-12-15 13:15:24.578404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 
nsid:1 lba:24271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.578423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.583291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efa7d8 00:35:16.696 [2024-12-15 13:15:24.584151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.584171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:16.696 [2024-12-15 13:15:24.594379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eefae0 00:35:16.696 [2024-12-15 13:15:24.595719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.696 [2024-12-15 13:15:24.595738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.602910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef7970 00:35:16.956 [2024-12-15 13:15:24.603914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.603934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.612007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eef6a8 00:35:16.956 [2024-12-15 13:15:24.613078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.613097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.620904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efd640 00:35:16.956 [2024-12-15 13:15:24.621888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.621907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.629364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1ca0 00:35:16.956 [2024-12-15 13:15:24.630324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.630343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.639325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efb048 00:35:16.956 [2024-12-15 13:15:24.640342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.640363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.648534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee3498 00:35:16.956 
[2024-12-15 13:15:24.649798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.649818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:16.956 28119.00 IOPS, 109.84 MiB/s [2024-12-15T12:15:24.863Z] [2024-12-15 13:15:24.656971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef6458 00:35:16.956 [2024-12-15 13:15:24.658196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.658215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.665152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efda78 00:35:16.956 [2024-12-15 13:15:24.666351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.666371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.674476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee23b8 00:35:16.956 [2024-12-15 13:15:24.675397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.675418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.683689] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1430 00:35:16.956 [2024-12-15 13:15:24.684634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.684654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.694949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef4298 00:35:16.956 [2024-12-15 13:15:24.696375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.696395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.703114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efd208 00:35:16.956 [2024-12-15 13:15:24.704113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.704133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:16.956 [2024-12-15 13:15:24.713063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eee5c8 00:35:16.956 [2024-12-15 13:15:24.714125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.956 [2024-12-15 13:15:24.714145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 
dnr:0 00:35:16.957 [2024-12-15 13:15:24.722991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee01f8 00:35:16.957 [2024-12-15 13:15:24.724396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.724415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.729678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eed4e8 00:35:16.957 [2024-12-15 13:15:24.730355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.730374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.740854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efe720 00:35:16.957 [2024-12-15 13:15:24.741888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.741908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.750583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef0ff8 00:35:16.957 [2024-12-15 13:15:24.751892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.751912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.760185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eed0b0 00:35:16.957 [2024-12-15 13:15:24.761609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.761631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.766892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef5be8 00:35:16.957 [2024-12-15 13:15:24.767578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.767597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.778833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edf988 00:35:16.957 [2024-12-15 13:15:24.780237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.780256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.785417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ede038 00:35:16.957 [2024-12-15 13:15:24.786125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.786144] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.794794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef2948 00:35:16.957 [2024-12-15 13:15:24.795614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.795633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.805926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef6cc8 00:35:16.957 [2024-12-15 13:15:24.807220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.807240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.814997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef3a28 00:35:16.957 [2024-12-15 13:15:24.816297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.816316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.823465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee1b48 00:35:16.957 [2024-12-15 13:15:24.824668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.824688] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.832190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edfdc0 00:35:16.957 [2024-12-15 13:15:24.833359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.833378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.841278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee95a0 00:35:16.957 [2024-12-15 13:15:24.842452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.842472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.850230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef7100 00:35:16.957 [2024-12-15 13:15:24.850959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.957 [2024-12-15 13:15:24.850979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:16.957 [2024-12-15 13:15:24.859434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee84c0 00:35:16.957 [2024-12-15 13:15:24.860424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8913 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:16.957 [2024-12-15 13:15:24.860444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.867816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eee190 00:35:17.217 [2024-12-15 13:15:24.868869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.868889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.878427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef2948 00:35:17.217 [2024-12-15 13:15:24.879965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.879984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.884734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efd208 00:35:17.217 [2024-12-15 13:15:24.885429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.885448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.896061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4140 00:35:17.217 [2024-12-15 13:15:24.897459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.897479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.904319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edece0 00:35:17.217 [2024-12-15 13:15:24.905270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.905290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.912754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eff3c8 00:35:17.217 [2024-12-15 13:15:24.913771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.913790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.921750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edece0 00:35:17.217 [2024-12-15 13:15:24.922723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.922743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.931102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1868 00:35:17.217 [2024-12-15 13:15:24.932327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.932347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.940742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee5220 00:35:17.217 [2024-12-15 13:15:24.942051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.942071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.949256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee5658 00:35:17.217 [2024-12-15 13:15:24.950314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.950335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.958499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef8a50 00:35:17.217 [2024-12-15 13:15:24.959492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.959511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.968727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee27f0 00:35:17.217 
[2024-12-15 13:15:24.970169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.970189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.217 [2024-12-15 13:15:24.975303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee88f8 00:35:17.217 [2024-12-15 13:15:24.976034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.217 [2024-12-15 13:15:24.976053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:24.984675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eea248 00:35:17.218 [2024-12-15 13:15:24.985537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:24.985557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:24.995926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef81e0 00:35:17.218 [2024-12-15 13:15:24.997246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:24.997268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.005113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x122ddc0) with pdu=0x200016ee3498 00:35:17.218 [2024-12-15 13:15:25.006471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.006490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.013581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edf550 00:35:17.218 [2024-12-15 13:15:25.014664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.014684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.022638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef7100 00:35:17.218 [2024-12-15 13:15:25.023631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.023651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.031117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ede8a8 00:35:17.218 [2024-12-15 13:15:25.032107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.032127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.042233] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef2d80 00:35:17.218 [2024-12-15 13:15:25.043818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.043841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.048709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef8e88 00:35:17.218 [2024-12-15 13:15:25.049565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.049584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.058131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee0630 00:35:17.218 [2024-12-15 13:15:25.059147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.059166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.067333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eebfd0 00:35:17.218 [2024-12-15 13:15:25.067872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.067891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:35:17.218 [2024-12-15 13:15:25.075728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eebfd0 00:35:17.218 [2024-12-15 13:15:25.076246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.076269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.085157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef8e88 00:35:17.218 [2024-12-15 13:15:25.085685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.085704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.093919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef8618 00:35:17.218 [2024-12-15 13:15:25.094755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.094774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.103166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef5378 00:35:17.218 [2024-12-15 13:15:25.104037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.104055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.218 [2024-12-15 13:15:25.114188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef3a28 00:35:17.218 [2024-12-15 13:15:25.115548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.218 [2024-12-15 13:15:25.115567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.477 [2024-12-15 13:15:25.123390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eea680 00:35:17.477 [2024-12-15 13:15:25.124712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-15 13:15:25.124743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.477 [2024-12-15 13:15:25.129639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efc998 00:35:17.477 [2024-12-15 13:15:25.130283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-15 13:15:25.130301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.477 [2024-12-15 13:15:25.139046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eebfd0 00:35:17.477 [2024-12-15 13:15:25.139787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.477 [2024-12-15 13:15:25.139806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.477 [2024-12-15 13:15:25.148369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ede470 00:35:17.477 [2024-12-15 13:15:25.149277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.149296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.158187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee1b48 00:35:17.478 [2024-12-15 13:15:25.159121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.159140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.168290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef5be8 00:35:17.478 [2024-12-15 13:15:25.169650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.169669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.177667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efe720 00:35:17.478 [2024-12-15 13:15:25.179187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.478 [2024-12-15 13:15:25.179206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.184163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee9e10 00:35:17.478 [2024-12-15 13:15:25.184975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.184994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.194193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee5a90 00:35:17.478 [2024-12-15 13:15:25.194877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.194896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.204295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eef6a8 00:35:17.478 [2024-12-15 13:15:25.205559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.205584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.213644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edf550 00:35:17.478 [2024-12-15 13:15:25.215057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:15416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.215076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.222694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef46d0 00:35:17.478 [2024-12-15 13:15:25.224104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.224123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.229684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eef270 00:35:17.478 [2024-12-15 13:15:25.230471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.230491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.238661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eecc78 00:35:17.478 [2024-12-15 13:15:25.239507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.239526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.247147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef3e60 00:35:17.478 [2024-12-15 13:15:25.247967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.247985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.257993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efda78 00:35:17.478 [2024-12-15 13:15:25.259181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.259199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.265920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1ca0 00:35:17.478 [2024-12-15 13:15:25.267202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.267221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.273549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef81e0 00:35:17.478 [2024-12-15 13:15:25.274228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.274247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.284264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee6b70 
00:35:17.478 [2024-12-15 13:15:25.285421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.285440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.292562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edf550 00:35:17.478 [2024-12-15 13:15:25.293280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.293299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.303574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef6cc8 00:35:17.478 [2024-12-15 13:15:25.305105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.305124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.309880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4de8 00:35:17.478 [2024-12-15 13:15:25.310602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.310624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.318326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x122ddc0) with pdu=0x200016ee1b48 00:35:17.478 [2024-12-15 13:15:25.319026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.319045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.329229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eee190 00:35:17.478 [2024-12-15 13:15:25.330325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.330344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.338291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efb480 00:35:17.478 [2024-12-15 13:15:25.339292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.339311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.346729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edece0 00:35:17.478 [2024-12-15 13:15:25.347788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.347807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.355813] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eec408 00:35:17.478 [2024-12-15 13:15:25.356876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.356895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.364987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee0a68 00:35:17.478 [2024-12-15 13:15:25.365708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.365727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.373404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eedd58 00:35:17.478 [2024-12-15 13:15:25.374704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.374723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.478 [2024-12-15 13:15:25.381767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1ca0 00:35:17.478 [2024-12-15 13:15:25.382400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.478 [2024-12-15 13:15:25.382420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.390958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eeea00 00:35:17.738 [2024-12-15 13:15:25.391565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.391585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.399147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef9b30 00:35:17.738 [2024-12-15 13:15:25.399836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.399855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.408477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016edf550 00:35:17.738 [2024-12-15 13:15:25.409296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.409315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.419580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eedd58 00:35:17.738 [2024-12-15 13:15:25.420932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.420951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.428985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef4298 00:35:17.738 [2024-12-15 13:15:25.430393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.430412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.438298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4140 00:35:17.738 [2024-12-15 13:15:25.439866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.439884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.444757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef96f8 00:35:17.738 [2024-12-15 13:15:25.445474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.445493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.454175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef0ff8 00:35:17.738 [2024-12-15 13:15:25.455031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.455050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.462654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee8d30 00:35:17.738 [2024-12-15 13:15:25.463474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.463493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.471984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efeb58 00:35:17.738 [2024-12-15 13:15:25.472928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.472947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.481352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1430 00:35:17.738 [2024-12-15 13:15:25.482425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.482444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.490528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee38d0 00:35:17.738 [2024-12-15 13:15:25.491153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.738 [2024-12-15 13:15:25.491172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.501246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eed0b0 00:35:17.738 [2024-12-15 13:15:25.502766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.502785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.507533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef9b30 00:35:17.738 [2024-12-15 13:15:25.508261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.508280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.518756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eef6a8 00:35:17.738 [2024-12-15 13:15:25.520247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.520266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.524927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016efac10 00:35:17.738 [2024-12-15 13:15:25.525627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23276 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.525647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.534261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee3498 00:35:17.738 [2024-12-15 13:15:25.535080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.535099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.544231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef1ca0 00:35:17.738 [2024-12-15 13:15:25.545094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.545121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.553202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eefae0 00:35:17.738 [2024-12-15 13:15:25.554162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.554181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.562129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee4de8 00:35:17.738 [2024-12-15 13:15:25.563099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.563119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.571107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef4b08 00:35:17.738 [2024-12-15 13:15:25.572076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.572095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.580090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ede8a8 00:35:17.738 [2024-12-15 13:15:25.581030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.581049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.588755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef46d0 00:35:17.738 [2024-12-15 13:15:25.589627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.589647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.597833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ee1f80 00:35:17.738 
[2024-12-15 13:15:25.598772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.598790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.607777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef92c0 00:35:17.738 [2024-12-15 13:15:25.608848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.738 [2024-12-15 13:15:25.608867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.738 [2024-12-15 13:15:25.616677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eeee38 00:35:17.739 [2024-12-15 13:15:25.617747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.739 [2024-12-15 13:15:25.617766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.739 [2024-12-15 13:15:25.625662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016eef6a8 00:35:17.739 [2024-12-15 13:15:25.626752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.739 [2024-12-15 13:15:25.626771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.739 [2024-12-15 13:15:25.634881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x122ddc0) with pdu=0x200016eed0b0 00:35:17.739 [2024-12-15 13:15:25.636048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.739 [2024-12-15 13:15:25.636067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.739 [2024-12-15 13:15:25.643419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef3a28 00:35:17.998 [2024-12-15 13:15:25.644629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.998 [2024-12-15 13:15:25.644650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.998 [2024-12-15 13:15:25.651872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122ddc0) with pdu=0x200016ef9f68 00:35:17.998 [2024-12-15 13:15:25.652688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.998 [2024-12-15 13:15:25.652707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.998 28185.50 IOPS, 110.10 MiB/s 00:35:17.998 Latency(us) 00:35:17.998 [2024-12-15T12:15:25.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.998 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:17.998 nvme0n1 : 2.01 28199.78 110.16 0.00 0.00 4533.31 1778.83 14667.58 00:35:17.998 [2024-12-15T12:15:25.905Z] =================================================================================================================== 00:35:17.998 [2024-12-15T12:15:25.905Z] Total : 28199.78 110.16 0.00 0.00 
4533.31 1778.83 14667.58 00:35:17.998 { 00:35:17.998 "results": [ 00:35:17.998 { 00:35:17.998 "job": "nvme0n1", 00:35:17.998 "core_mask": "0x2", 00:35:17.998 "workload": "randwrite", 00:35:17.998 "status": "finished", 00:35:17.998 "queue_depth": 128, 00:35:17.998 "io_size": 4096, 00:35:17.998 "runtime": 2.005831, 00:35:17.998 "iops": 28199.783531115034, 00:35:17.998 "mibps": 110.1554044184181, 00:35:17.998 "io_failed": 0, 00:35:17.998 "io_timeout": 0, 00:35:17.998 "avg_latency_us": 4533.310113583939, 00:35:17.998 "min_latency_us": 1778.8342857142857, 00:35:17.998 "max_latency_us": 14667.580952380953 00:35:17.998 } 00:35:17.998 ], 00:35:17.998 "core_count": 1 00:35:17.998 } 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:17.998 | .driver_specific 00:35:17.998 | .nvme_error 00:35:17.998 | .status_code 00:35:17.998 | .command_transient_transport_error' 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197213 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197213 ']' 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197213 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # 
uname 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.998 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197213 00:35:18.257 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:18.257 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:18.257 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197213' 00:35:18.257 killing process with pid 1197213 00:35:18.257 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197213 00:35:18.257 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.257 00:35:18.257 Latency(us) 00:35:18.257 [2024-12-15T12:15:26.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.257 [2024-12-15T12:15:26.164Z] =================================================================================================================== 00:35:18.257 [2024-12-15T12:15:26.164Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.257 13:15:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197213 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # qd=16 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197840 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1197840 /var/tmp/bperf.sock 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197840 ']' 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:18.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.257 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.257 [2024-12-15 13:15:26.147044] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:18.257 [2024-12-15 13:15:26.147103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197840 ] 00:35:18.257 I/O size of 131072 is greater than zero copy threshold (65536). 
00:35:18.257 Zero copy mechanism will not be used. 00:35:18.516 [2024-12-15 13:15:26.208365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.516 [2024-12-15 13:15:26.231220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.516 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.516 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:18.516 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.516 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.775 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:18.775 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.775 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.775 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.775 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:18.775 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.034 nvme0n1 00:35:19.294 13:15:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:19.294 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.294 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.294 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.294 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:19.294 13:15:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.294 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:19.294 Zero copy mechanism will not be used. 00:35:19.294 Running I/O for 2 seconds... 
00:35:19.294 [2024-12-15 13:15:27.046538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.046617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.046644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.051089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.051161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.051182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.055344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.055413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.055433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.059597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.059661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.059684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.063721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.063786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.063805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.067935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.067997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.068015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.072052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.072118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.072137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.076224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.076278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.076296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.080438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.080506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.080524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.084565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.084615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.084633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.088717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.088779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.088796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.092956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.093014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:19.294 [2024-12-15 13:15:27.093032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.097161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.097224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.097242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.294 [2024-12-15 13:15:27.101289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.294 [2024-12-15 13:15:27.101351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.294 [2024-12-15 13:15:27.101369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.105414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.105474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.105493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.109555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.109609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.109628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.113630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.113690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.113708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.117849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.117944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.117963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.122909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.123105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.123124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.129334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.129400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.129419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.134124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.134225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.134244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.138955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.139034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.139053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.144005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.144059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.144078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.148181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 
00:35:19.295 [2024-12-15 13:15:27.148234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.148253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.152443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.152508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.152527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.156698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.156754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.156789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.160965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.161016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.161035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.165189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.165263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.165282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.169436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.169489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.169507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.173653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.173714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.173740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.177900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.177952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.177970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.182066] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.182136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.182154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.186259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.186326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.186344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.190451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.190512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.190530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.295 [2024-12-15 13:15:27.194654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.194717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.194735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:19.295 [2024-12-15 13:15:27.198889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.295 [2024-12-15 13:15:27.198947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.295 [2024-12-15 13:15:27.198965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.203367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.203439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.203459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.208079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.208139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.208157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.213187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.213266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.213285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.218665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.218727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.218746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.223884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.223949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.223967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.229177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.229290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.229309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.233910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.233971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.233989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.238557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.238614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.238632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.243464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.243547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.243566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.248541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.248596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.556 [2024-12-15 13:15:27.248614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.556 [2024-12-15 13:15:27.253898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.556 [2024-12-15 13:15:27.253953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:19.556 [2024-12-15 13:15:27.253972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:19.556 [2024-12-15 13:15:27.259337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:19.556 [2024-12-15 13:15:27.259401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:19.556 [2024-12-15 13:15:27.259420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... approximately 80 further near-identical cycles elided (timestamps 13:15:27.264 through 13:15:27.632): each cycle is a tcp.c:2241:data_crc32_calc_done *ERROR* "Data digest error" on the same tqpair=(0x122e2a0) with pdu=0x200016eff3c8, followed by a len:32 WRITE command notice at a varying lba (cid:0, then cid:1 from 13:15:27.508 onward), each completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the log continues in the same pattern ...]
00:35:19.820 [2024-12-15 13:15:27.632640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.632658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.636851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.636910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.636928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.641162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.641222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.641240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.645427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.645489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.645506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.649737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.649792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.649809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.654027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.654097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.654118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.658285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.658353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.658371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.662567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.662627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.662645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.667005] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.667059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.667077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.671715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.671791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.671810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.675991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.676060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.676078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.680331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.680395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.680414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:19.820 [2024-12-15 13:15:27.684618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.684688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.684706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.688925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.688979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.820 [2024-12-15 13:15:27.688996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.820 [2024-12-15 13:15:27.693240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.820 [2024-12-15 13:15:27.693303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.821 [2024-12-15 13:15:27.693320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.821 [2024-12-15 13:15:27.697530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.821 [2024-12-15 13:15:27.697581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.821 [2024-12-15 13:15:27.697599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.821 [2024-12-15 13:15:27.701818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.821 [2024-12-15 13:15:27.701876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.821 [2024-12-15 13:15:27.701894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.821 [2024-12-15 13:15:27.706158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.821 [2024-12-15 13:15:27.706212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.821 [2024-12-15 13:15:27.706230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:19.821 [2024-12-15 13:15:27.710430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.821 [2024-12-15 13:15:27.710491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.821 [2024-12-15 13:15:27.710509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:19.821 [2024-12-15 13:15:27.714765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.821 [2024-12-15 13:15:27.714838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.821 [2024-12-15 13:15:27.714857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:19.821 [2024-12-15 13:15:27.719127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.821 [2024-12-15 13:15:27.719190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.821 [2024-12-15 13:15:27.719207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:19.821 [2024-12-15 13:15:27.723490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:19.821 [2024-12-15 13:15:27.723552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.821 [2024-12-15 13:15:27.723570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.081 [2024-12-15 13:15:27.727901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.081 [2024-12-15 13:15:27.727952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.081 [2024-12-15 13:15:27.727970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.081 [2024-12-15 13:15:27.732266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.081 [2024-12-15 13:15:27.732332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:20.081 [2024-12-15 13:15:27.732350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.081 [2024-12-15 13:15:27.736575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.081 [2024-12-15 13:15:27.736628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.081 [2024-12-15 13:15:27.736646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.081 [2024-12-15 13:15:27.740911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.081 [2024-12-15 13:15:27.740974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.081 [2024-12-15 13:15:27.740992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.081 [2024-12-15 13:15:27.745213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.081 [2024-12-15 13:15:27.745273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.081 [2024-12-15 13:15:27.745290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.081 [2024-12-15 13:15:27.749565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.081 [2024-12-15 13:15:27.749622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.081 [2024-12-15 13:15:27.749641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.753861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.753921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.753939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.758159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.758228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.758246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.762423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.762485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.762503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.766684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.766742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.766763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.771346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.771498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.771515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.776795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.776968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.776986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.783222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.783397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.783415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.789551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 
00:35:20.082 [2024-12-15 13:15:27.789730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.789748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.796275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.796418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.796438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.802821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.802978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.802997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.809112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.809284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.809304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.815837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.815997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.816016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.822215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.822418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.822436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.828511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.828661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.828679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.834885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.835033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.835052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.841124] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.841292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.841311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.847413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.847597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.847615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.854091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.854264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.854283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.860409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.860566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.860584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:20.082 [2024-12-15 13:15:27.866929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.867072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.867091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.872570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.872635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.872653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.876998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.877069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.877087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.881502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.881561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.881578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.082 [2024-12-15 13:15:27.886097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.082 [2024-12-15 13:15:27.886162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.082 [2024-12-15 13:15:27.886180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line cycle repeats continuously from 13:15:27.890 through 13:15:28.047: a tcp.c:2241:data_crc32_calc_done *ERROR* "Data digest error" on tqpair=(0x122e2a0) with pdu=0x200016eff3c8, the offending WRITE command (sqid:1 cid:1 nsid:1, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with sqhd cycling through 0002/0022/0042/0062 ...]
00:35:20.344 6376.00 IOPS, 797.00 MiB/s
[2024-12-15T12:15:28.251Z] [... digest-error cycle continues unchanged from 13:15:28.044 through 13:15:28.290 ...] 00:35:20.607 [2024-12-15 13:15:28.290264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.290327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.290345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.294914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.294981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.294999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.299594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.299659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.299677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.304020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.304116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.304135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.308595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.308683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.308701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.313380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.313437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.313455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.318207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.318273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.318292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.323559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.323647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.323669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.329209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 
00:35:20.607 [2024-12-15 13:15:28.329280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.329299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.335117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.335443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.335464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.342289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.342505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.342525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.348707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.348973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.348993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.355188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.355433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.355454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.360198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.360441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.360461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.364725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.364988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.365008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.368997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.369237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.369256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.373234] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.373489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.373509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.377799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.378074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.378094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.383161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.383436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.383456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.389050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.389333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.389352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:20.607 [2024-12-15 13:15:28.394178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.394461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.394481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.399181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.399455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.399475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.404073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.404341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.404360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.409107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.409380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.409401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.414015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.414255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.414276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.418787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.419053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.419074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.607 [2024-12-15 13:15:28.423489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.607 [2024-12-15 13:15:28.423765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.607 [2024-12-15 13:15:28.423785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.428266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.428531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.428551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.433714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.434070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.434090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.439761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.440027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.440047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.444692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.444990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.445011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.449598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.449852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:20.608 [2024-12-15 13:15:28.449872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.454549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.454830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.454852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.459519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.459779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.459803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.464656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.464918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.464938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.469368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.469650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.469669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.474282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.474540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.474560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.478888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.479151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.479171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.484924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.485255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.485276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.490716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.490954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.490974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.495512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.495778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.495798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.500266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.500540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.500560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.505135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.608 [2024-12-15 13:15:28.505395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.505415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.608 [2024-12-15 13:15:28.510075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 
00:35:20.608 [2024-12-15 13:15:28.510317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.608 [2024-12-15 13:15:28.510337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.869 [2024-12-15 13:15:28.514881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.869 [2024-12-15 13:15:28.515153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.869 [2024-12-15 13:15:28.515173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.869 [2024-12-15 13:15:28.519596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.869 [2024-12-15 13:15:28.519818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.869 [2024-12-15 13:15:28.519845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.869 [2024-12-15 13:15:28.524315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.869 [2024-12-15 13:15:28.524570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.869 [2024-12-15 13:15:28.524590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.869 [2024-12-15 13:15:28.529021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.870 [2024-12-15 13:15:28.529263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.870 [2024-12-15 13:15:28.529284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.870 [2024-12-15 13:15:28.533709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.870 [2024-12-15 13:15:28.533968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.870 [2024-12-15 13:15:28.533988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.870 [2024-12-15 13:15:28.538245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.870 [2024-12-15 13:15:28.538483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.870 [2024-12-15 13:15:28.538503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.870 [2024-12-15 13:15:28.542800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.870 [2024-12-15 13:15:28.543054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.870 [2024-12-15 13:15:28.543074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:20.870 [2024-12-15 13:15:28.547314] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.870 [2024-12-15 13:15:28.547592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.870 [2024-12-15 13:15:28.547612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:20.870 [2024-12-15 13:15:28.551927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.870 [2024-12-15 13:15:28.552204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.870 [2024-12-15 13:15:28.552224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:20.870 [2024-12-15 13:15:28.556724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.870 [2024-12-15 13:15:28.556974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.870 [2024-12-15 13:15:28.556995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:20.870 [2024-12-15 13:15:28.561758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:20.870 [2024-12-15 13:15:28.562015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.870 [2024-12-15 13:15:28.562051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0
00:35:20.870 [2024-12-15 13:15:28.567458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.567725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.567746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.573681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.573942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.573963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.579208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.579442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.579462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.583797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.584047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.584067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.588538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.588774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.588797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.593582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.593822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.593848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.598333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.598555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.598575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.603547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.603776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.603796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.608446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.608683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.608703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.613232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.613465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.613484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.617976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.618225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.618245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.623002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.623248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.623268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.628783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.629040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.629060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.633513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.633756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.633776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.638404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.638637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.638655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.643276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.643506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.643527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.648351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.648594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.648614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.653483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.653735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.653757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.658544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.870 [2024-12-15 13:15:28.658782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.870 [2024-12-15 13:15:28.658801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.870 [2024-12-15 13:15:28.663332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.663564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.663585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.668034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.668280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.668300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.673016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.673265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.673286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.678053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.678282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.678303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.683159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.683399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.683420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.688674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.688938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.688958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.693531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.693767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.693787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.698326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.698571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.698592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.703162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.703396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.703416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.708369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.708613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.708634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.713312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.713536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.713556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.718251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.718484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.718508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.722638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.722885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.722905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.727524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.727760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.727780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.732450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.732685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.732705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.737047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.737287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.737307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.741695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.741942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.741962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.745949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.746187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.746207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.750675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.750913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.750933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.755425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.755668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.755688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.760099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.760341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.760361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.764599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.764846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.764866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.769543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:20.871 [2024-12-15 13:15:28.769775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:20.871 [2024-12-15 13:15:28.769795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:20.871 [2024-12-15 13:15:28.774658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.132 [2024-12-15 13:15:28.774901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.132 [2024-12-15 13:15:28.774922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.132 [2024-12-15 13:15:28.779910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.132 [2024-12-15 13:15:28.780153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.780173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.785209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.785292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.785311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.789718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.789961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.789982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.794117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.794354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.794374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.798379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.798610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.798629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.802552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.802789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.802811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.806919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.807166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.807186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.811388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.811640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.811660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.815742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.816004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.816024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.820109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.820356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.820378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.824540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.824789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.824810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.829088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.829322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.829343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.833197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.833438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.833460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.837328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.837556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.837580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.841404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.841643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.841663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.845547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.845795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.845816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.849690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.849940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.849960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.854186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.854465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.854487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.860036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.860409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.860430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.865540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.865792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.865812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.870573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.870815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.870842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.875260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.875546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.875567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.880095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.880366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.880386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.884705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.884953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.884973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.889403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.889635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.889656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.894127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.894378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.133 [2024-12-15 13:15:28.894397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.133 [2024-12-15 13:15:28.898852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.133 [2024-12-15 13:15:28.899103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.899123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.903661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.903916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.903935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.908279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.908572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.908592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.913660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.913935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.913955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.919419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.919696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.919717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.924161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.924400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.924420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.928961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.929205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.929227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.934290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.934542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.934562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.939889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.940147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.940168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.945750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.946088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.946109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.953072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.953297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.953318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.958449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.958674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.958693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:21.134 [2024-12-15 13:15:28.962897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8
00:35:21.134 [2024-12-15 13:15:28.963110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:21.134 [2024-12-15 13:15:28.963130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:28.966862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:28.967057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:28.967081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:28.970747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:28.970948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:28.970966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:28.975138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:28.975341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:28.975360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:28.980498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:28.980794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:28.980814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:28.985769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:28.986001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:28.986021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:28.990374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:28.990583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:28.990603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:28.994953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:28.995272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:28.995292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:28.999745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:28.999959] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:28.999977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:29.004253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:29.004505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:29.004526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:29.008637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:29.008859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:29.008878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:29.013064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 13:15:29.013270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:29.013290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:29.017514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.134 [2024-12-15 
13:15:29.017714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.134 [2024-12-15 13:15:29.017741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:21.134 [2024-12-15 13:15:29.021410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.135 [2024-12-15 13:15:29.021625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.135 [2024-12-15 13:15:29.021643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:21.135 [2024-12-15 13:15:29.025364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.135 [2024-12-15 13:15:29.025574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.135 [2024-12-15 13:15:29.025592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:21.135 [2024-12-15 13:15:29.029172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.135 [2024-12-15 13:15:29.029389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.135 [2024-12-15 13:15:29.029410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:21.135 [2024-12-15 13:15:29.033080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) 
with pdu=0x200016eff3c8 00:35:21.135 [2024-12-15 13:15:29.033263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.135 [2024-12-15 13:15:29.033281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:21.135 [2024-12-15 13:15:29.037314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.135 [2024-12-15 13:15:29.037501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.135 [2024-12-15 13:15:29.037518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:21.394 [2024-12-15 13:15:29.041808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.394 [2024-12-15 13:15:29.041985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.394 [2024-12-15 13:15:29.042004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:21.394 6366.00 IOPS, 795.75 MiB/s [2024-12-15T12:15:29.301Z] [2024-12-15 13:15:29.047066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x122e2a0) with pdu=0x200016eff3c8 00:35:21.394 [2024-12-15 13:15:29.047217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.394 [2024-12-15 13:15:29.047235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:21.394 
00:35:21.394 Latency(us)
00:35:21.394 [2024-12-15T12:15:29.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:21.394 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:21.394 nvme0n1 : 2.00 6364.13 795.52 0.00 0.00 2509.86 1755.43 8925.38
00:35:21.394 [2024-12-15T12:15:29.301Z] ===================================================================================================================
00:35:21.394 [2024-12-15T12:15:29.301Z] Total : 6364.13 795.52 0.00 0.00 2509.86 1755.43 8925.38
00:35:21.394 {
00:35:21.394 "results": [
00:35:21.394 {
00:35:21.394 "job": "nvme0n1",
00:35:21.394 "core_mask": "0x2",
00:35:21.394 "workload": "randwrite",
00:35:21.394 "status": "finished",
00:35:21.394 "queue_depth": 16,
00:35:21.394 "io_size": 131072,
00:35:21.394 "runtime": 2.003731,
00:35:21.394 "iops": 6364.127719738827,
00:35:21.394 "mibps": 795.5159649673534,
00:35:21.394 "io_failed": 0,
00:35:21.394 "io_timeout": 0,
00:35:21.394 "avg_latency_us": 2509.8605867240244,
00:35:21.394 "min_latency_us": 1755.4285714285713,
00:35:21.394 "max_latency_us": 8925.379047619048
00:35:21.394 }
00:35:21.394 ],
00:35:21.394 "core_count": 1
00:35:21.394 }
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:21.394 | .driver_specific
00:35:21.394 | .nvme_error
00:35:21.394 | .status_code
00:35:21.394 | .command_transient_transport_error'
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:21.394 13:15:29
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 412 > 0 ))
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197840
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197840 ']'
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197840
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:21.394 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197840
00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197840'
00:35:21.654 killing process with pid 1197840
00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197840
00:35:21.654 Received shutdown signal, test time was about 2.000000 seconds
00:35:21.654
00:35:21.654 Latency(us)
00:35:21.654 [2024-12-15T12:15:29.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:21.654 [2024-12-15T12:15:29.561Z] ===================================================================================================================
00:35:21.654 [2024-12-15T12:15:29.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978
-- # wait 1197840 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1196125 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196125 ']' 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196125 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196125 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196125' 00:35:21.654 killing process with pid 1196125 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196125 00:35:21.654 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196125 00:35:21.913 00:35:21.913 real 0m13.793s 00:35:21.913 user 0m26.473s 00:35:21.913 sys 0m4.489s 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.913 ************************************ 00:35:21.913 END TEST nvmf_digest_error 00:35:21.913 ************************************ 00:35:21.913 13:15:29 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:21.913 rmmod nvme_tcp 00:35:21.913 rmmod nvme_fabrics 00:35:21.913 rmmod nvme_keyring 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1196125 ']' 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1196125 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1196125 ']' 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1196125 00:35:21.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1196125) - No such process 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1196125 is not found' 00:35:21.913 Process with pid 1196125 is not found 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:21.913 13:15:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.913 13:15:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:24.449 00:35:24.449 real 0m35.990s 00:35:24.449 user 0m55.003s 00:35:24.449 sys 0m13.445s 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.449 ************************************ 00:35:24.449 END TEST nvmf_digest 00:35:24.449 ************************************ 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # 
run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.449 ************************************ 00:35:24.449 START TEST nvmf_bdevperf 00:35:24.449 ************************************ 00:35:24.449 13:15:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:24.449 * Looking for test storage... 00:35:24.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 
00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:24.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.449 --rc genhtml_branch_coverage=1 00:35:24.449 --rc genhtml_function_coverage=1 00:35:24.449 --rc genhtml_legend=1 00:35:24.449 --rc geninfo_all_blocks=1 00:35:24.449 --rc geninfo_unexecuted_blocks=1 00:35:24.449 00:35:24.449 ' 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:24.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.449 --rc genhtml_branch_coverage=1 00:35:24.449 --rc genhtml_function_coverage=1 00:35:24.449 --rc genhtml_legend=1 00:35:24.449 --rc geninfo_all_blocks=1 00:35:24.449 --rc geninfo_unexecuted_blocks=1 00:35:24.449 00:35:24.449 ' 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:24.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.449 --rc genhtml_branch_coverage=1 00:35:24.449 --rc genhtml_function_coverage=1 00:35:24.449 --rc genhtml_legend=1 00:35:24.449 --rc geninfo_all_blocks=1 00:35:24.449 --rc geninfo_unexecuted_blocks=1 00:35:24.449 00:35:24.449 ' 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:24.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.449 --rc genhtml_branch_coverage=1 00:35:24.449 --rc genhtml_function_coverage=1 00:35:24.449 --rc genhtml_legend=1 00:35:24.449 --rc geninfo_all_blocks=1 00:35:24.449 --rc geninfo_unexecuted_blocks=1 00:35:24.449 00:35:24.449 ' 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.449 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.450 13:15:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.450 13:15:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:24.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 
00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:24.450 13:15:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:35:31.020 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:31.020 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.020 13:15:37 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:31.020 Found net devices under 0000:af:00.0: cvl_0_0 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:31.020 Found net devices under 0000:af:00.1: cvl_0_1 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:31.020 13:15:37 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:31.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:31.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:35:31.020 00:35:31.020 --- 10.0.0.2 ping statistics --- 00:35:31.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.020 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:31.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:31.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:35:31.020 00:35:31.020 --- 10.0.0.1 ping statistics --- 00:35:31.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.020 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:35:31.020 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1201808 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1201808 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1201808 ']' 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.021 13:15:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.021 [2024-12-15 13:15:38.032453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:31.021 [2024-12-15 13:15:38.032503] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.021 [2024-12-15 13:15:38.112874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:31.021 [2024-12-15 13:15:38.135230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:31.021 [2024-12-15 13:15:38.135269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:31.021 [2024-12-15 13:15:38.135276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.021 [2024-12-15 13:15:38.135282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.021 [2024-12-15 13:15:38.135287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:31.021 [2024-12-15 13:15:38.136504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:31.021 [2024-12-15 13:15:38.136615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.021 [2024-12-15 13:15:38.136616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.021 [2024-12-15 13:15:38.268114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.021 13:15:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.021 Malloc0 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.021 [2024-12-15 13:15:38.326265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:31.021 { 00:35:31.021 "params": { 00:35:31.021 "name": "Nvme$subsystem", 00:35:31.021 "trtype": "$TEST_TRANSPORT", 00:35:31.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:31.021 "adrfam": "ipv4", 00:35:31.021 "trsvcid": "$NVMF_PORT", 00:35:31.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:31.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:31.021 "hdgst": ${hdgst:-false}, 00:35:31.021 "ddgst": ${ddgst:-false} 00:35:31.021 }, 00:35:31.021 "method": "bdev_nvme_attach_controller" 00:35:31.021 } 00:35:31.021 EOF 00:35:31.021 )") 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:31.021 13:15:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:31.021 "params": { 00:35:31.021 "name": "Nvme1", 00:35:31.021 "trtype": "tcp", 00:35:31.021 "traddr": "10.0.0.2", 00:35:31.021 "adrfam": "ipv4", 00:35:31.021 "trsvcid": "4420", 00:35:31.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:31.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:31.021 "hdgst": false, 00:35:31.021 "ddgst": false 00:35:31.021 }, 00:35:31.021 "method": "bdev_nvme_attach_controller" 00:35:31.021 }' 00:35:31.021 [2024-12-15 13:15:38.377174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:31.021 [2024-12-15 13:15:38.377223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201838 ] 00:35:31.021 [2024-12-15 13:15:38.457101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.021 [2024-12-15 13:15:38.479353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.021 Running I/O for 1 seconds... 
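The `gen_nvmf_target_json` trace above shows how the bdevperf config is built: one heredoc fragment per subsystem, shell-expanded (so `$TEST_TRANSPORT`, `$NVMF_PORT`, and the `${hdgst:-false}` defaults resolve at generation time), joined with `IFS=,`, and fed to bdevperf via `/dev/fd/62`. A simplified re-creation of that pattern — variable values are illustrative stand-ins, and the real helper additionally pipes the result through `jq .`:

```shell
# Sketch of the heredoc-per-subsystem JSON generation seen in the log.
gen_nvmf_target_json() {
    local subsystem config=()
    # Illustrative values; the real script inherits these from common.sh.
    local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,            # join multiple subsystem fragments with commas
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

Passing the config over a file descriptor (`--json /dev/fd/62`) avoids writing a temp file while still giving bdevperf a complete attach-controller description.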
00:35:31.959 11447.00 IOPS, 44.71 MiB/s 00:35:31.959 Latency(us) 00:35:31.959 [2024-12-15T12:15:39.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.959 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:31.959 Verification LBA range: start 0x0 length 0x4000 00:35:31.959 Nvme1n1 : 1.01 11538.22 45.07 0.00 0.00 11052.04 2215.74 14480.34 00:35:31.959 [2024-12-15T12:15:39.866Z] =================================================================================================================== 00:35:31.959 [2024-12-15T12:15:39.866Z] Total : 11538.22 45.07 0.00 0.00 11052.04 2215.74 14480.34 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1202063 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:31.959 { 00:35:31.959 "params": { 00:35:31.959 "name": "Nvme$subsystem", 00:35:31.959 "trtype": "$TEST_TRANSPORT", 00:35:31.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:31.959 "adrfam": "ipv4", 00:35:31.959 "trsvcid": "$NVMF_PORT", 00:35:31.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:31.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:31.959 "hdgst": ${hdgst:-false}, 00:35:31.959 "ddgst": 
${ddgst:-false} 00:35:31.959 }, 00:35:31.959 "method": "bdev_nvme_attach_controller" 00:35:31.959 } 00:35:31.959 EOF 00:35:31.959 )") 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:31.959 13:15:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:31.959 "params": { 00:35:31.959 "name": "Nvme1", 00:35:31.959 "trtype": "tcp", 00:35:31.959 "traddr": "10.0.0.2", 00:35:31.959 "adrfam": "ipv4", 00:35:31.959 "trsvcid": "4420", 00:35:31.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:31.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:31.959 "hdgst": false, 00:35:31.959 "ddgst": false 00:35:31.959 }, 00:35:31.959 "method": "bdev_nvme_attach_controller" 00:35:31.959 }' 00:35:32.219 [2024-12-15 13:15:39.893536] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:32.219 [2024-12-15 13:15:39.893585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202063 ] 00:35:32.219 [2024-12-15 13:15:39.968349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.219 [2024-12-15 13:15:39.988571] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.478 Running I/O for 15 seconds... 
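The result table from the first (1-second) run above is internally consistent: MiB/s is just IOPS times the 4096-byte I/O size. A quick arithmetic check of the Nvme1n1 row:

```shell
# 11538.22 IOPS at 4 KiB per I/O should equal the reported 45.07 MiB/s.
awk 'BEGIN { printf "%.2f\n", 11538.22 * 4096 / (1024 * 1024) }'
```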
00:35:34.351 11393.00 IOPS, 44.50 MiB/s [2024-12-15T12:15:43.200Z] 11459.00 IOPS, 44.76 MiB/s [2024-12-15T12:15:43.200Z] 13:15:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1201808 00:35:35.293 13:15:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:35:35.293 [2024-12-15 13:15:42.862309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:35.293 [2024-12-15 13:15:42.862351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair.c command/completion pairs elided: after the bdevperf process was killed, every outstanding I/O on sqid:1 -- READ commands lba:113288 through lba:114040 (len:8, SGL TRANSPORT DATA BLOCK) and WRITE commands lba:114056 through lba:114184 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) -- was reported with the same completion status ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-12-15 13:15:42.864205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:35.296 [2024-12-15 13:15:42.864411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.864419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b9920 is same with the state(6) to be set 00:35:35.296 [2024-12-15 13:15:42.864428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:35.296 [2024-12-15 13:15:42.864433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:35.296 [2024-12-15 13:15:42.864439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114048 len:8 PRP1 0x0 PRP2 0x0 00:35:35.296 [2024-12-15 13:15:42.864447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:35.296 [2024-12-15 13:15:42.867285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting 
controller 00:35:35.296 [2024-12-15 13:15:42.867342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.296 [2024-12-15 13:15:42.867863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.296 [2024-12-15 13:15:42.867882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.296 [2024-12-15 13:15:42.867890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.296 [2024-12-15 13:15:42.868064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.296 [2024-12-15 13:15:42.868238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.296 [2024-12-15 13:15:42.868247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.296 [2024-12-15 13:15:42.868255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.296 [2024-12-15 13:15:42.868264] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.296 [2024-12-15 13:15:42.880406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.296 [2024-12-15 13:15:42.880837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.296 [2024-12-15 13:15:42.880872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.296 [2024-12-15 13:15:42.880880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.296 [2024-12-15 13:15:42.881059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.296 [2024-12-15 13:15:42.881220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.296 [2024-12-15 13:15:42.881229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.296 [2024-12-15 13:15:42.881236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.296 [2024-12-15 13:15:42.881244] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.296 [2024-12-15 13:15:42.893223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.296 [2024-12-15 13:15:42.893641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.296 [2024-12-15 13:15:42.893682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.296 [2024-12-15 13:15:42.893707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.296 [2024-12-15 13:15:42.894240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.296 [2024-12-15 13:15:42.894421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.296 [2024-12-15 13:15:42.894431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.296 [2024-12-15 13:15:42.894438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.297 [2024-12-15 13:15:42.894444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.297 [2024-12-15 13:15:42.905952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.297 [2024-12-15 13:15:42.906301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.297 [2024-12-15 13:15:42.906319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.297 [2024-12-15 13:15:42.906327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.297 [2024-12-15 13:15:42.906487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.297 [2024-12-15 13:15:42.906646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.297 [2024-12-15 13:15:42.906655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.297 [2024-12-15 13:15:42.906661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.297 [2024-12-15 13:15:42.906667] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.297 [2024-12-15 13:15:42.918799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.297 [2024-12-15 13:15:42.919233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.297 [2024-12-15 13:15:42.919280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.297 [2024-12-15 13:15:42.919304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.297 [2024-12-15 13:15:42.919903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.297 [2024-12-15 13:15:42.920453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.297 [2024-12-15 13:15:42.920462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.297 [2024-12-15 13:15:42.920469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.297 [2024-12-15 13:15:42.920475] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.297 [2024-12-15 13:15:42.933997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.297 [2024-12-15 13:15:42.934525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.297 [2024-12-15 13:15:42.934548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.297 [2024-12-15 13:15:42.934558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.297 [2024-12-15 13:15:42.934811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.297 [2024-12-15 13:15:42.935075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.297 [2024-12-15 13:15:42.935089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.297 [2024-12-15 13:15:42.935103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.297 [2024-12-15 13:15:42.935113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.297 [2024-12-15 13:15:42.946879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.297 [2024-12-15 13:15:42.947319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.297 [2024-12-15 13:15:42.947365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.297 [2024-12-15 13:15:42.947390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.297 [2024-12-15 13:15:42.947987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.297 [2024-12-15 13:15:42.948536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.297 [2024-12-15 13:15:42.948545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.297 [2024-12-15 13:15:42.948552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.297 [2024-12-15 13:15:42.948559] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.297 [2024-12-15 13:15:42.959691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.297 [2024-12-15 13:15:42.960041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.297 [2024-12-15 13:15:42.960087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.297 [2024-12-15 13:15:42.960111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.297 [2024-12-15 13:15:42.960578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.297 [2024-12-15 13:15:42.960739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.297 [2024-12-15 13:15:42.960747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.297 [2024-12-15 13:15:42.960753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.297 [2024-12-15 13:15:42.960759] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.297 [2024-12-15 13:15:42.974687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.297 [2024-12-15 13:15:42.975211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.297 [2024-12-15 13:15:42.975235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.297 [2024-12-15 13:15:42.975246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.297 [2024-12-15 13:15:42.975500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.297 [2024-12-15 13:15:42.975756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.297 [2024-12-15 13:15:42.975770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.297 [2024-12-15 13:15:42.975779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.297 [2024-12-15 13:15:42.975789] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.297 [2024-12-15 13:15:42.987683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.297 [2024-12-15 13:15:42.988033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.297 [2024-12-15 13:15:42.988051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.297 [2024-12-15 13:15:42.988058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.297 [2024-12-15 13:15:42.988227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.297 [2024-12-15 13:15:42.988395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.297 [2024-12-15 13:15:42.988405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.297 [2024-12-15 13:15:42.988412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.297 [2024-12-15 13:15:42.988419] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.297 [2024-12-15 13:15:43.000567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.297 [2024-12-15 13:15:43.000995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.297 [2024-12-15 13:15:43.001035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.297 [2024-12-15 13:15:43.001061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.297 [2024-12-15 13:15:43.001640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.297 [2024-12-15 13:15:43.001801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.297 [2024-12-15 13:15:43.001810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.297 [2024-12-15 13:15:43.001816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.297 [2024-12-15 13:15:43.001823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.297 [2024-12-15 13:15:43.013353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.297 [2024-12-15 13:15:43.013774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.013819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.013858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.014441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.014846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.014871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.014878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.014885] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.026128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.026520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.026537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.026548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.026708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.026892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.026902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.026909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.026916] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.038928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.039297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.039342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.039366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.039839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.040001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.040011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.040017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.040023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.051866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.052227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.052273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.052297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.052758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.052925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.052935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.052941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.052948] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.064743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.065143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.065162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.065169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.065329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.065493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.065504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.065510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.065517] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.077740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.078034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.078052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.078059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.078232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.078405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.078415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.078422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.078429] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.090772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.091137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.091156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.091164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.091337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.091511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.091522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.091528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.091535] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.103723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.104140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.104158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.104166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.104325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.104485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.104494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.104504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.104511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.116595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.116964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.117011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.117036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.117620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.118045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.118054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.118061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.118069] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.129633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.130048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.130066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.298 [2024-12-15 13:15:43.130074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.298 [2024-12-15 13:15:43.130247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.298 [2024-12-15 13:15:43.130421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.298 [2024-12-15 13:15:43.130431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.298 [2024-12-15 13:15:43.130438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.298 [2024-12-15 13:15:43.130445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.298 [2024-12-15 13:15:43.142614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.298 [2024-12-15 13:15:43.143072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.298 [2024-12-15 13:15:43.143129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-12-15 13:15:43.143153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.299 [2024-12-15 13:15:43.143734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.299 [2024-12-15 13:15:43.143949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-12-15 13:15:43.143960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-12-15 13:15:43.143966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-12-15 13:15:43.143973] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.299 [2024-12-15 13:15:43.155463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-12-15 13:15:43.155854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-12-15 13:15:43.155872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-12-15 13:15:43.155880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.299 [2024-12-15 13:15:43.156048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.299 [2024-12-15 13:15:43.156220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-12-15 13:15:43.156229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-12-15 13:15:43.156236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-12-15 13:15:43.156242] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.299 [2024-12-15 13:15:43.168442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-12-15 13:15:43.168847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-12-15 13:15:43.168866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-12-15 13:15:43.168874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.299 [2024-12-15 13:15:43.169043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.299 [2024-12-15 13:15:43.169215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-12-15 13:15:43.169224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-12-15 13:15:43.169231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-12-15 13:15:43.169238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.299 [2024-12-15 13:15:43.181365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-12-15 13:15:43.181788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-12-15 13:15:43.181806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-12-15 13:15:43.181814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.299 [2024-12-15 13:15:43.181989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.299 [2024-12-15 13:15:43.182158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-12-15 13:15:43.182168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-12-15 13:15:43.182174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-12-15 13:15:43.182181] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.299 [2024-12-15 13:15:43.194250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.299 [2024-12-15 13:15:43.194656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.299 [2024-12-15 13:15:43.194701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.299 [2024-12-15 13:15:43.194739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.299 [2024-12-15 13:15:43.194950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.299 [2024-12-15 13:15:43.195122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.299 [2024-12-15 13:15:43.195132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.299 [2024-12-15 13:15:43.195139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.299 [2024-12-15 13:15:43.195146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 10181.33 IOPS, 39.77 MiB/s [2024-12-15T12:15:43.467Z] [2024-12-15 13:15:43.207285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.207682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-12-15 13:15:43.207700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-12-15 13:15:43.207708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.560 [2024-12-15 13:15:43.207888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.560 [2024-12-15 13:15:43.208062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-12-15 13:15:43.208072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-12-15 13:15:43.208078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-12-15 13:15:43.208085] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-12-15 13:15:43.220279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.220685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-12-15 13:15:43.220704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-12-15 13:15:43.220712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.560 [2024-12-15 13:15:43.220910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.560 [2024-12-15 13:15:43.221095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-12-15 13:15:43.221106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-12-15 13:15:43.221113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-12-15 13:15:43.221120] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-12-15 13:15:43.233501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.233882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-12-15 13:15:43.233901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-12-15 13:15:43.233909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.560 [2024-12-15 13:15:43.234093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.560 [2024-12-15 13:15:43.234281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-12-15 13:15:43.234291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-12-15 13:15:43.234298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-12-15 13:15:43.234304] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-12-15 13:15:43.246806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.247277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-12-15 13:15:43.247297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-12-15 13:15:43.247306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.560 [2024-12-15 13:15:43.247503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.560 [2024-12-15 13:15:43.247699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-12-15 13:15:43.247710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-12-15 13:15:43.247717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-12-15 13:15:43.247726] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-12-15 13:15:43.260038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.260460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-12-15 13:15:43.260479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-12-15 13:15:43.260487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.560 [2024-12-15 13:15:43.260670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.560 [2024-12-15 13:15:43.260878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-12-15 13:15:43.260890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-12-15 13:15:43.260898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-12-15 13:15:43.260906] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-12-15 13:15:43.273148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.273551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-12-15 13:15:43.273569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-12-15 13:15:43.273577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.560 [2024-12-15 13:15:43.273749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.560 [2024-12-15 13:15:43.273928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-12-15 13:15:43.273938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-12-15 13:15:43.273948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-12-15 13:15:43.273955] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-12-15 13:15:43.286152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.286559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-12-15 13:15:43.286577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-12-15 13:15:43.286586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.560 [2024-12-15 13:15:43.286770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.560 [2024-12-15 13:15:43.286962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-12-15 13:15:43.286973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-12-15 13:15:43.286980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-12-15 13:15:43.286987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-12-15 13:15:43.299202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.299627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-12-15 13:15:43.299645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-12-15 13:15:43.299653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.560 [2024-12-15 13:15:43.299840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.560 [2024-12-15 13:15:43.300014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-12-15 13:15:43.300025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-12-15 13:15:43.300031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-12-15 13:15:43.300038] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-12-15 13:15:43.312302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.312732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-12-15 13:15:43.312750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.560 [2024-12-15 13:15:43.312758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.560 [2024-12-15 13:15:43.312959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.560 [2024-12-15 13:15:43.313143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.560 [2024-12-15 13:15:43.313153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.560 [2024-12-15 13:15:43.313161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.560 [2024-12-15 13:15:43.313168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.560 [2024-12-15 13:15:43.325477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.560 [2024-12-15 13:15:43.325896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-12-15 13:15:43.325915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-12-15 13:15:43.325923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.561 [2024-12-15 13:15:43.326107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.561 [2024-12-15 13:15:43.326293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-12-15 13:15:43.326302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-12-15 13:15:43.326309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-12-15 13:15:43.326317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.561 [2024-12-15 13:15:43.338716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-12-15 13:15:43.339142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-12-15 13:15:43.339161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-12-15 13:15:43.339170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.561 [2024-12-15 13:15:43.339354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.561 [2024-12-15 13:15:43.339540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-12-15 13:15:43.339550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-12-15 13:15:43.339557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-12-15 13:15:43.339566] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.561 [2024-12-15 13:15:43.351948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-12-15 13:15:43.352294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-12-15 13:15:43.352313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-12-15 13:15:43.352322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.561 [2024-12-15 13:15:43.352506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.561 [2024-12-15 13:15:43.352692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-12-15 13:15:43.352702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-12-15 13:15:43.352709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-12-15 13:15:43.352717] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.561 [2024-12-15 13:15:43.365020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-12-15 13:15:43.365440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-12-15 13:15:43.365458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-12-15 13:15:43.365469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.561 [2024-12-15 13:15:43.365643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.561 [2024-12-15 13:15:43.365816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-12-15 13:15:43.365831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-12-15 13:15:43.365838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-12-15 13:15:43.365845] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.561 [2024-12-15 13:15:43.378177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:35.561 [2024-12-15 13:15:43.378567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-12-15 13:15:43.378587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:35.561 [2024-12-15 13:15:43.378595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:35.561 [2024-12-15 13:15:43.378779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:35.561 [2024-12-15 13:15:43.378969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:35.561 [2024-12-15 13:15:43.378980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:35.561 [2024-12-15 13:15:43.378987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:35.561 [2024-12-15 13:15:43.378994] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:35.561 [2024-12-15 13:15:43.391447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.561 [2024-12-15 13:15:43.391884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.561 [2024-12-15 13:15:43.391904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.561 [2024-12-15 13:15:43.391912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.561 [2024-12-15 13:15:43.392096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.561 [2024-12-15 13:15:43.392281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.561 [2024-12-15 13:15:43.392291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.561 [2024-12-15 13:15:43.392298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.561 [2024-12-15 13:15:43.392305] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.561 [2024-12-15 13:15:43.404853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.561 [2024-12-15 13:15:43.405311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.561 [2024-12-15 13:15:43.405330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.561 [2024-12-15 13:15:43.405339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.561 [2024-12-15 13:15:43.405534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.561 [2024-12-15 13:15:43.405735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.561 [2024-12-15 13:15:43.405746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.561 [2024-12-15 13:15:43.405755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.561 [2024-12-15 13:15:43.405763] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.561 [2024-12-15 13:15:43.417923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.561 [2024-12-15 13:15:43.418268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.561 [2024-12-15 13:15:43.418286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.561 [2024-12-15 13:15:43.418294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.561 [2024-12-15 13:15:43.418467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.561 [2024-12-15 13:15:43.418641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.561 [2024-12-15 13:15:43.418650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.561 [2024-12-15 13:15:43.418657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.561 [2024-12-15 13:15:43.418663] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.561 [2024-12-15 13:15:43.431090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.561 [2024-12-15 13:15:43.431508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.561 [2024-12-15 13:15:43.431526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.561 [2024-12-15 13:15:43.431535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.561 [2024-12-15 13:15:43.431718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.561 [2024-12-15 13:15:43.431910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.561 [2024-12-15 13:15:43.431922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.561 [2024-12-15 13:15:43.431929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.561 [2024-12-15 13:15:43.431937] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.561 [2024-12-15 13:15:43.444376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.561 [2024-12-15 13:15:43.444789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.561 [2024-12-15 13:15:43.444808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.561 [2024-12-15 13:15:43.444816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.561 [2024-12-15 13:15:43.445006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.561 [2024-12-15 13:15:43.445196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.561 [2024-12-15 13:15:43.445206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.561 [2024-12-15 13:15:43.445213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.561 [2024-12-15 13:15:43.445223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.561 [2024-12-15 13:15:43.457472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.561 [2024-12-15 13:15:43.457912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.561 [2024-12-15 13:15:43.457931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.562 [2024-12-15 13:15:43.457938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.562 [2024-12-15 13:15:43.458111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.562 [2024-12-15 13:15:43.458284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.562 [2024-12-15 13:15:43.458294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.562 [2024-12-15 13:15:43.458300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.562 [2024-12-15 13:15:43.458307] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 [2024-12-15 13:15:43.470502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-12-15 13:15:43.470809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-12-15 13:15:43.470868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-12-15 13:15:43.470892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.822 [2024-12-15 13:15:43.471474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.822 [2024-12-15 13:15:43.472016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-12-15 13:15:43.472026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-12-15 13:15:43.472033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-12-15 13:15:43.472040] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 [2024-12-15 13:15:43.483570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-12-15 13:15:43.483984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-12-15 13:15:43.484002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-12-15 13:15:43.484011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.822 [2024-12-15 13:15:43.484185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.822 [2024-12-15 13:15:43.484358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-12-15 13:15:43.484368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-12-15 13:15:43.484374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-12-15 13:15:43.484381] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 [2024-12-15 13:15:43.496525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-12-15 13:15:43.496935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-12-15 13:15:43.496952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-12-15 13:15:43.496960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.822 [2024-12-15 13:15:43.497129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.822 [2024-12-15 13:15:43.497297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-12-15 13:15:43.497307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-12-15 13:15:43.497313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-12-15 13:15:43.497320] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 [2024-12-15 13:15:43.509337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-12-15 13:15:43.509684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-12-15 13:15:43.509701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-12-15 13:15:43.509708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.822 [2024-12-15 13:15:43.509891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.822 [2024-12-15 13:15:43.510060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-12-15 13:15:43.510070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-12-15 13:15:43.510076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-12-15 13:15:43.510083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 [2024-12-15 13:15:43.522135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-12-15 13:15:43.522528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-12-15 13:15:43.522545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-12-15 13:15:43.522553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.822 [2024-12-15 13:15:43.522713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.822 [2024-12-15 13:15:43.522878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-12-15 13:15:43.522888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-12-15 13:15:43.522894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-12-15 13:15:43.522901] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 [2024-12-15 13:15:43.534981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-12-15 13:15:43.535388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-12-15 13:15:43.535445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-12-15 13:15:43.535469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.822 [2024-12-15 13:15:43.535974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.822 [2024-12-15 13:15:43.536135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.822 [2024-12-15 13:15:43.536144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.822 [2024-12-15 13:15:43.536150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.822 [2024-12-15 13:15:43.536157] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.822 [2024-12-15 13:15:43.547795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.822 [2024-12-15 13:15:43.548163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.822 [2024-12-15 13:15:43.548209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.822 [2024-12-15 13:15:43.548233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.548776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.548961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.548971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.548977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.548984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.560636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.561058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.561104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.561127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.561709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.561930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.561940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.561946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.561953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.573438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.573850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.573868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.573876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.574035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.574195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.574207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.574213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.574220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.586238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.586655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.586672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.586680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.586845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.587028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.587038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.587045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.587051] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.599094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.599519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.599563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.599586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.600127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.600297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.600307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.600313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.600320] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.611819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.612167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.612184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.612191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.612350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.612509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.612518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.612524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.612534] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.624564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.624969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.625014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.625038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.625619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.626181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.626192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.626198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.626204] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.637297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.637710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.637727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.637735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.637910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.638080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.638089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.638095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.638102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.650231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.650673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.650718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.650742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.651243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.651405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.651413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.651419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.651425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.663047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.663463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.663480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.663487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.823 [2024-12-15 13:15:43.663646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.823 [2024-12-15 13:15:43.663806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.823 [2024-12-15 13:15:43.663815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.823 [2024-12-15 13:15:43.663821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.823 [2024-12-15 13:15:43.663834] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.823 [2024-12-15 13:15:43.675902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.823 [2024-12-15 13:15:43.676328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.823 [2024-12-15 13:15:43.676376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.823 [2024-12-15 13:15:43.676401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.824 [2024-12-15 13:15:43.676999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.824 [2024-12-15 13:15:43.677466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.824 [2024-12-15 13:15:43.677476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.824 [2024-12-15 13:15:43.677482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.824 [2024-12-15 13:15:43.677489] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.824 [2024-12-15 13:15:43.688913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.824 [2024-12-15 13:15:43.689341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.824 [2024-12-15 13:15:43.689359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.824 [2024-12-15 13:15:43.689367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.824 [2024-12-15 13:15:43.689540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.824 [2024-12-15 13:15:43.689715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.824 [2024-12-15 13:15:43.689725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.824 [2024-12-15 13:15:43.689731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.824 [2024-12-15 13:15:43.689738] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.824 [2024-12-15 13:15:43.701784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.824 [2024-12-15 13:15:43.702205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.824 [2024-12-15 13:15:43.702222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.824 [2024-12-15 13:15:43.702230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.824 [2024-12-15 13:15:43.702394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.824 [2024-12-15 13:15:43.702552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.824 [2024-12-15 13:15:43.702562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.824 [2024-12-15 13:15:43.702568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.824 [2024-12-15 13:15:43.702575] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:35.824 [2024-12-15 13:15:43.714505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:35.824 [2024-12-15 13:15:43.714861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.824 [2024-12-15 13:15:43.714908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:35.824 [2024-12-15 13:15:43.714931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:35.824 [2024-12-15 13:15:43.715394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:35.824 [2024-12-15 13:15:43.715555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:35.824 [2024-12-15 13:15:43.715564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:35.824 [2024-12-15 13:15:43.715570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:35.824 [2024-12-15 13:15:43.715576] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.084 [2024-12-15 13:15:43.727573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.084 [2024-12-15 13:15:43.727999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.084 [2024-12-15 13:15:43.728018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.084 [2024-12-15 13:15:43.728026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.084 [2024-12-15 13:15:43.728215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.084 [2024-12-15 13:15:43.728390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.084 [2024-12-15 13:15:43.728400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.084 [2024-12-15 13:15:43.728406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.084 [2024-12-15 13:15:43.728413] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.084 [2024-12-15 13:15:43.740544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.084 [2024-12-15 13:15:43.740977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.084 [2024-12-15 13:15:43.741025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.084 [2024-12-15 13:15:43.741049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.084 [2024-12-15 13:15:43.741306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.085 [2024-12-15 13:15:43.741466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.085 [2024-12-15 13:15:43.741479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.085 [2024-12-15 13:15:43.741484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.085 [2024-12-15 13:15:43.741491] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.085 [2024-12-15 13:15:43.753339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.085 [2024-12-15 13:15:43.753729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.085 [2024-12-15 13:15:43.753746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.085 [2024-12-15 13:15:43.753754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.085 [2024-12-15 13:15:43.753937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.085 [2024-12-15 13:15:43.754106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.085 [2024-12-15 13:15:43.754116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.085 [2024-12-15 13:15:43.754122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.085 [2024-12-15 13:15:43.754129] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.085 [2024-12-15 13:15:43.766129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.085 [2024-12-15 13:15:43.766543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.085 [2024-12-15 13:15:43.766589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.085 [2024-12-15 13:15:43.766613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.085 [2024-12-15 13:15:43.767211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.085 [2024-12-15 13:15:43.767707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.085 [2024-12-15 13:15:43.767716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.085 [2024-12-15 13:15:43.767723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.085 [2024-12-15 13:15:43.767730] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.085 [2024-12-15 13:15:43.778915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.085 [2024-12-15 13:15:43.779308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.085 [2024-12-15 13:15:43.779325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.085 [2024-12-15 13:15:43.779332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.085 [2024-12-15 13:15:43.779491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.085 [2024-12-15 13:15:43.779650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.085 [2024-12-15 13:15:43.779659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.085 [2024-12-15 13:15:43.779666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.085 [2024-12-15 13:15:43.779675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.085 [2024-12-15 13:15:43.791705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.085 [2024-12-15 13:15:43.792100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.085 [2024-12-15 13:15:43.792118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.085 [2024-12-15 13:15:43.792125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.085 [2024-12-15 13:15:43.792283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.085 [2024-12-15 13:15:43.792443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.085 [2024-12-15 13:15:43.792452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.085 [2024-12-15 13:15:43.792458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.085 [2024-12-15 13:15:43.792465] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.085 [2024-12-15 13:15:43.804446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.085 [2024-12-15 13:15:43.804818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.085 [2024-12-15 13:15:43.804839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.085 [2024-12-15 13:15:43.804847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.085 [2024-12-15 13:15:43.805006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.085 [2024-12-15 13:15:43.805165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.085 [2024-12-15 13:15:43.805175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.085 [2024-12-15 13:15:43.805181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.085 [2024-12-15 13:15:43.805187] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.085 [2024-12-15 13:15:43.817227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.085 [2024-12-15 13:15:43.817567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.085 [2024-12-15 13:15:43.817584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.085 [2024-12-15 13:15:43.817591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.085 [2024-12-15 13:15:43.817750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.085 [2024-12-15 13:15:43.817933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.085 [2024-12-15 13:15:43.817943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.085 [2024-12-15 13:15:43.817950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.085 [2024-12-15 13:15:43.817956] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.085 [2024-12-15 13:15:43.830005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.085 [2024-12-15 13:15:43.830434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.085 [2024-12-15 13:15:43.830493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.085 [2024-12-15 13:15:43.830517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.085 [2024-12-15 13:15:43.830895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.085 [2024-12-15 13:15:43.831057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.085 [2024-12-15 13:15:43.831067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.085 [2024-12-15 13:15:43.831073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.085 [2024-12-15 13:15:43.831079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.085 [2024-12-15 13:15:43.842867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.085 [2024-12-15 13:15:43.843277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.085 [2024-12-15 13:15:43.843323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.085 [2024-12-15 13:15:43.843347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.085 [2024-12-15 13:15:43.843866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.085 [2024-12-15 13:15:43.844028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.085 [2024-12-15 13:15:43.844038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.085 [2024-12-15 13:15:43.844044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.085 [2024-12-15 13:15:43.844050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.085 [2024-12-15 13:15:43.855601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.085 [2024-12-15 13:15:43.855901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.085 [2024-12-15 13:15:43.855919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.085 [2024-12-15 13:15:43.855927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.085 [2024-12-15 13:15:43.856086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.085 [2024-12-15 13:15:43.856246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.085 [2024-12-15 13:15:43.856255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.085 [2024-12-15 13:15:43.856262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.085 [2024-12-15 13:15:43.856268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.085 [2024-12-15 13:15:43.868462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.085 [2024-12-15 13:15:43.868895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.085 [2024-12-15 13:15:43.868938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.085 [2024-12-15 13:15:43.868961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.085 [2024-12-15 13:15:43.869522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.085 [2024-12-15 13:15:43.869929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.085 [2024-12-15 13:15:43.869949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.869963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.869977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.086 [2024-12-15 13:15:43.884013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.086 [2024-12-15 13:15:43.884518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.086 [2024-12-15 13:15:43.884540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.086 [2024-12-15 13:15:43.884551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.086 [2024-12-15 13:15:43.884807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.086 [2024-12-15 13:15:43.885070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.086 [2024-12-15 13:15:43.885084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.885094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.885105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.086 [2024-12-15 13:15:43.897012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.086 [2024-12-15 13:15:43.897463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.086 [2024-12-15 13:15:43.897480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.086 [2024-12-15 13:15:43.897487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.086 [2024-12-15 13:15:43.897656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.086 [2024-12-15 13:15:43.897832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.086 [2024-12-15 13:15:43.897843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.897849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.897856] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.086 [2024-12-15 13:15:43.910004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.086 [2024-12-15 13:15:43.910429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.086 [2024-12-15 13:15:43.910446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.086 [2024-12-15 13:15:43.910453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.086 [2024-12-15 13:15:43.910613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.086 [2024-12-15 13:15:43.910772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.086 [2024-12-15 13:15:43.910784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.910790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.910797] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.086 [2024-12-15 13:15:43.922821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.086 [2024-12-15 13:15:43.923237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.086 [2024-12-15 13:15:43.923277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.086 [2024-12-15 13:15:43.923302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.086 [2024-12-15 13:15:43.923864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.086 [2024-12-15 13:15:43.924049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.086 [2024-12-15 13:15:43.924059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.924065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.924071] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.086 [2024-12-15 13:15:43.935660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.086 [2024-12-15 13:15:43.936078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.086 [2024-12-15 13:15:43.936095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.086 [2024-12-15 13:15:43.936102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.086 [2024-12-15 13:15:43.936261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.086 [2024-12-15 13:15:43.936420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.086 [2024-12-15 13:15:43.936430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.936436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.936442] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.086 [2024-12-15 13:15:43.948468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.086 [2024-12-15 13:15:43.948874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.086 [2024-12-15 13:15:43.948913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.086 [2024-12-15 13:15:43.948939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.086 [2024-12-15 13:15:43.949521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.086 [2024-12-15 13:15:43.949988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.086 [2024-12-15 13:15:43.949998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.950004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.950011] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.086 [2024-12-15 13:15:43.961272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.086 [2024-12-15 13:15:43.961687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.086 [2024-12-15 13:15:43.961704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.086 [2024-12-15 13:15:43.961711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.086 [2024-12-15 13:15:43.961893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.086 [2024-12-15 13:15:43.962062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.086 [2024-12-15 13:15:43.962071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.962078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.962084] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.086 [2024-12-15 13:15:43.974068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.086 [2024-12-15 13:15:43.974480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.086 [2024-12-15 13:15:43.974497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.086 [2024-12-15 13:15:43.974505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.086 [2024-12-15 13:15:43.974664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.086 [2024-12-15 13:15:43.974829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.086 [2024-12-15 13:15:43.974839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.974845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.974852] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.086 [2024-12-15 13:15:43.987021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.086 [2024-12-15 13:15:43.987350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.086 [2024-12-15 13:15:43.987368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.086 [2024-12-15 13:15:43.987375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.086 [2024-12-15 13:15:43.987543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.086 [2024-12-15 13:15:43.987711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.086 [2024-12-15 13:15:43.987721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.086 [2024-12-15 13:15:43.987727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.086 [2024-12-15 13:15:43.987733] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.347 [2024-12-15 13:15:43.999785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.347 [2024-12-15 13:15:44.000140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.347 [2024-12-15 13:15:44.000161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.347 [2024-12-15 13:15:44.000169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.347 [2024-12-15 13:15:44.000328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.347 [2024-12-15 13:15:44.000487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.347 [2024-12-15 13:15:44.000497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.347 [2024-12-15 13:15:44.000503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.347 [2024-12-15 13:15:44.000509] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.347 [2024-12-15 13:15:44.012624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.347 [2024-12-15 13:15:44.012963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.347 [2024-12-15 13:15:44.012980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.347 [2024-12-15 13:15:44.012987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.347 [2024-12-15 13:15:44.013146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.347 [2024-12-15 13:15:44.013305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.347 [2024-12-15 13:15:44.013314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.347 [2024-12-15 13:15:44.013320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.347 [2024-12-15 13:15:44.013326] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.347 [2024-12-15 13:15:44.025371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.347 [2024-12-15 13:15:44.025778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.347 [2024-12-15 13:15:44.025837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.347 [2024-12-15 13:15:44.025863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.347 [2024-12-15 13:15:44.026346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.347 [2024-12-15 13:15:44.026515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.347 [2024-12-15 13:15:44.026524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.347 [2024-12-15 13:15:44.026530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.347 [2024-12-15 13:15:44.026536] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.347 [2024-12-15 13:15:44.038138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.347 [2024-12-15 13:15:44.038565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.347 [2024-12-15 13:15:44.038583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.347 [2024-12-15 13:15:44.038590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.347 [2024-12-15 13:15:44.038761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.347 [2024-12-15 13:15:44.038935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.347 [2024-12-15 13:15:44.038946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.347 [2024-12-15 13:15:44.038953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.347 [2024-12-15 13:15:44.038959] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.347 [2024-12-15 13:15:44.050931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.347 [2024-12-15 13:15:44.051342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.347 [2024-12-15 13:15:44.051379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.347 [2024-12-15 13:15:44.051405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.347 [2024-12-15 13:15:44.051982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.347 [2024-12-15 13:15:44.052178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.347 [2024-12-15 13:15:44.052196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.347 [2024-12-15 13:15:44.052211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.347 [2024-12-15 13:15:44.052224] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.347 [2024-12-15 13:15:44.065797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.347 [2024-12-15 13:15:44.066309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.347 [2024-12-15 13:15:44.066355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.347 [2024-12-15 13:15:44.066379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.347 [2024-12-15 13:15:44.066974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.347 [2024-12-15 13:15:44.067378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.347 [2024-12-15 13:15:44.067391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.347 [2024-12-15 13:15:44.067401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.347 [2024-12-15 13:15:44.067411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.347 [2024-12-15 13:15:44.078710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.347 [2024-12-15 13:15:44.079144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.347 [2024-12-15 13:15:44.079189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.347 [2024-12-15 13:15:44.079212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.347 [2024-12-15 13:15:44.079794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.347 [2024-12-15 13:15:44.080325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.347 [2024-12-15 13:15:44.080335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.347 [2024-12-15 13:15:44.080345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.347 [2024-12-15 13:15:44.080353] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.347 [2024-12-15 13:15:44.091526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.091896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.091943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.091968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.092500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.092872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.092891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.092905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.092918] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 [2024-12-15 13:15:44.105868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.106365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.106415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.106440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.107037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.107309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.107321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.107329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.107338] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 [2024-12-15 13:15:44.118698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.119026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.119044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.119051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.119211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.119371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.119381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.119387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.119394] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 [2024-12-15 13:15:44.131427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.131762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.131779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.131786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.131972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.132141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.132151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.132157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.132164] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 [2024-12-15 13:15:44.144193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.144606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.144623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.144630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.144790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.144975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.144986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.144992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.144998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 [2024-12-15 13:15:44.157008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.157441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.157458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.157466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.157625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.157785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.157794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.157803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.157811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 [2024-12-15 13:15:44.169982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.170390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.170408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.170418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.170586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.170754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.170764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.170770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.170777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 [2024-12-15 13:15:44.182837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.183222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.183240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.183248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.183415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.183585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.183594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.183601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.183607] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 [2024-12-15 13:15:44.195719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.196116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.196161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.196184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.196628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.196789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.196798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.196804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.196810] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 7636.00 IOPS, 29.83 MiB/s [2024-12-15T12:15:44.255Z] [2024-12-15 13:15:44.208561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.208929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.348 [2024-12-15 13:15:44.208947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.348 [2024-12-15 13:15:44.208955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.348 [2024-12-15 13:15:44.209129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.348 [2024-12-15 13:15:44.209291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.348 [2024-12-15 13:15:44.209300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.348 [2024-12-15 13:15:44.209306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.348 [2024-12-15 13:15:44.209312] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.348 [2024-12-15 13:15:44.221473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.348 [2024-12-15 13:15:44.221861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.349 [2024-12-15 13:15:44.221907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.349 [2024-12-15 13:15:44.221931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.349 [2024-12-15 13:15:44.222436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.349 [2024-12-15 13:15:44.222606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.349 [2024-12-15 13:15:44.222616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.349 [2024-12-15 13:15:44.222622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.349 [2024-12-15 13:15:44.222629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.349 [2024-12-15 13:15:44.234324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.349 [2024-12-15 13:15:44.234663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.349 [2024-12-15 13:15:44.234681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.349 [2024-12-15 13:15:44.234688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.349 [2024-12-15 13:15:44.234870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.349 [2024-12-15 13:15:44.235039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.349 [2024-12-15 13:15:44.235049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.349 [2024-12-15 13:15:44.235055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.349 [2024-12-15 13:15:44.235062] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.349 [2024-12-15 13:15:44.247195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.349 [2024-12-15 13:15:44.247607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.349 [2024-12-15 13:15:44.247623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.349 [2024-12-15 13:15:44.247630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.349 [2024-12-15 13:15:44.247790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.349 [2024-12-15 13:15:44.247977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.349 [2024-12-15 13:15:44.247987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.349 [2024-12-15 13:15:44.247998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.349 [2024-12-15 13:15:44.248006] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.610 [2024-12-15 13:15:44.260064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.610 [2024-12-15 13:15:44.260456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.610 [2024-12-15 13:15:44.260474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.610 [2024-12-15 13:15:44.260481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.610 [2024-12-15 13:15:44.260640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.610 [2024-12-15 13:15:44.260800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.610 [2024-12-15 13:15:44.260809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.610 [2024-12-15 13:15:44.260815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.610 [2024-12-15 13:15:44.260822] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.610 [2024-12-15 13:15:44.273008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.610 [2024-12-15 13:15:44.273437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.610 [2024-12-15 13:15:44.273454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.610 [2024-12-15 13:15:44.273462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.610 [2024-12-15 13:15:44.273621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.610 [2024-12-15 13:15:44.273781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.610 [2024-12-15 13:15:44.273790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.610 [2024-12-15 13:15:44.273797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.610 [2024-12-15 13:15:44.273803] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.610 [2024-12-15 13:15:44.285822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.610 [2024-12-15 13:15:44.286232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.610 [2024-12-15 13:15:44.286250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.610 [2024-12-15 13:15:44.286257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.610 [2024-12-15 13:15:44.286417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.610 [2024-12-15 13:15:44.286576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.610 [2024-12-15 13:15:44.286585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.610 [2024-12-15 13:15:44.286591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.610 [2024-12-15 13:15:44.286597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.610 [2024-12-15 13:15:44.298573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.610 [2024-12-15 13:15:44.298991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.610 [2024-12-15 13:15:44.299043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.610 [2024-12-15 13:15:44.299067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.610 [2024-12-15 13:15:44.299568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.610 [2024-12-15 13:15:44.299728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.610 [2024-12-15 13:15:44.299737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.610 [2024-12-15 13:15:44.299743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.610 [2024-12-15 13:15:44.299749] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.610 [2024-12-15 13:15:44.311362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.610 [2024-12-15 13:15:44.311759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.610 [2024-12-15 13:15:44.311775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.610 [2024-12-15 13:15:44.311783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.610 [2024-12-15 13:15:44.311969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.610 [2024-12-15 13:15:44.312138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.610 [2024-12-15 13:15:44.312148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.610 [2024-12-15 13:15:44.312155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.610 [2024-12-15 13:15:44.312162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.610 [2024-12-15 13:15:44.324142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.610 [2024-12-15 13:15:44.324555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.610 [2024-12-15 13:15:44.324572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.610 [2024-12-15 13:15:44.324580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.610 [2024-12-15 13:15:44.324739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.610 [2024-12-15 13:15:44.324922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.610 [2024-12-15 13:15:44.324932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.610 [2024-12-15 13:15:44.324938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.610 [2024-12-15 13:15:44.324945] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.610 [2024-12-15 13:15:44.336941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.610 [2024-12-15 13:15:44.337347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.610 [2024-12-15 13:15:44.337391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.610 [2024-12-15 13:15:44.337422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.610 [2024-12-15 13:15:44.337948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.610 [2024-12-15 13:15:44.338118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.610 [2024-12-15 13:15:44.338127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.610 [2024-12-15 13:15:44.338134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.610 [2024-12-15 13:15:44.338141] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.610 [2024-12-15 13:15:44.349689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.610 [2024-12-15 13:15:44.350121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.610 [2024-12-15 13:15:44.350138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.610 [2024-12-15 13:15:44.350146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.610 [2024-12-15 13:15:44.350305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.610 [2024-12-15 13:15:44.350465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.610 [2024-12-15 13:15:44.350474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.610 [2024-12-15 13:15:44.350480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.610 [2024-12-15 13:15:44.350486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.610 [2024-12-15 13:15:44.362513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.610 [2024-12-15 13:15:44.362926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.610 [2024-12-15 13:15:44.362944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.610 [2024-12-15 13:15:44.362951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.610 [2024-12-15 13:15:44.363110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.610 [2024-12-15 13:15:44.363269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.610 [2024-12-15 13:15:44.363279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.610 [2024-12-15 13:15:44.363285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.610 [2024-12-15 13:15:44.363291] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.610 [2024-12-15 13:15:44.375269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.610 [2024-12-15 13:15:44.375679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.610 [2024-12-15 13:15:44.375695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.610 [2024-12-15 13:15:44.375703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.610 [2024-12-15 13:15:44.375868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.610 [2024-12-15 13:15:44.376058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.610 [2024-12-15 13:15:44.376068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.610 [2024-12-15 13:15:44.376075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.376081] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.388027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.388433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.388450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.388457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.388615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.388775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.388784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.388790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.388795] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.400771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.401175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.401193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.401200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.401359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.401518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.401528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.401534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.401540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.413730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.414112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.414158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.414181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.414762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.415032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.415041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.415052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.415059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.426697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.427063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.427081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.427089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.427256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.427425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.427434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.427441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.427448] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.439553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.439977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.440023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.440047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.440630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.441215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.441225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.441232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.441238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.452489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.452918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.452962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.452988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.453571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.454173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.454222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.454230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.454238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.465431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.465850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.465868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.465875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.466052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.466222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.466232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.466239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.466246] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.478423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.478836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.478855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.478862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.479036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.479208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.479218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.479225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.479231] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.491450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.491878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.491897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.491904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.492087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.492257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.492267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.492274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.611 [2024-12-15 13:15:44.492280] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.611 [2024-12-15 13:15:44.504481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.611 [2024-12-15 13:15:44.504810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.611 [2024-12-15 13:15:44.504833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.611 [2024-12-15 13:15:44.504845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.611 [2024-12-15 13:15:44.505018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.611 [2024-12-15 13:15:44.505194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.611 [2024-12-15 13:15:44.505203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.611 [2024-12-15 13:15:44.505209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.612 [2024-12-15 13:15:44.505216] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.872 [2024-12-15 13:15:44.517593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.872 [2024-12-15 13:15:44.518026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-12-15 13:15:44.518044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.872 [2024-12-15 13:15:44.518053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.872 [2024-12-15 13:15:44.518225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.872 [2024-12-15 13:15:44.518398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.872 [2024-12-15 13:15:44.518408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.872 [2024-12-15 13:15:44.518417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.872 [2024-12-15 13:15:44.518425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.872 [2024-12-15 13:15:44.530615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.872 [2024-12-15 13:15:44.531085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-12-15 13:15:44.531104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.872 [2024-12-15 13:15:44.531111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.531285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.531459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.531469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.531476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.531483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.543990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.544344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.544363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.544372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.544566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.544768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.544779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.544786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.544793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.557097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.557536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.557554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.557563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.557737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.557918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.557928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.557935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.557941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.570288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.570723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.570741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.570749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.570939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.571125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.571135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.571142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.571149] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.583618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.584070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.584089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.584098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.584281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.584467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.584477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.584488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.584496] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.596734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.597172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.597191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.597199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.597373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.597546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.597556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.597562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.597569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.609705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.610132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.610151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.610159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.610332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.610506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.610516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.610523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.610530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.622685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.623101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.623119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.623126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.623299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.623474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.623485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.623492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.623499] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.635675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.636016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.636035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.636042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.636211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.636379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.636389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.636395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.636401] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.648583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.648939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.648957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.648965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.873 [2024-12-15 13:15:44.649136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.873 [2024-12-15 13:15:44.649295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.873 [2024-12-15 13:15:44.649305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.873 [2024-12-15 13:15:44.649311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.873 [2024-12-15 13:15:44.649317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.873 [2024-12-15 13:15:44.661464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.873 [2024-12-15 13:15:44.661746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.873 [2024-12-15 13:15:44.661764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.873 [2024-12-15 13:15:44.661771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.874 [2024-12-15 13:15:44.661937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.874 [2024-12-15 13:15:44.662098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.874 [2024-12-15 13:15:44.662107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.874 [2024-12-15 13:15:44.662113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.874 [2024-12-15 13:15:44.662119] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.874 [2024-12-15 13:15:44.674369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.874 [2024-12-15 13:15:44.674786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.874 [2024-12-15 13:15:44.674804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.874 [2024-12-15 13:15:44.674816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.874 [2024-12-15 13:15:44.674991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.874 [2024-12-15 13:15:44.675162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.874 [2024-12-15 13:15:44.675171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.874 [2024-12-15 13:15:44.675178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.874 [2024-12-15 13:15:44.675184] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.874 [2024-12-15 13:15:44.687342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:36.874 [2024-12-15 13:15:44.687708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-12-15 13:15:44.687726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:36.874 [2024-12-15 13:15:44.687733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:36.874 [2024-12-15 13:15:44.687917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:36.874 [2024-12-15 13:15:44.688087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:36.874 [2024-12-15 13:15:44.688096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:36.874 [2024-12-15 13:15:44.688103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:36.874 [2024-12-15 13:15:44.688110] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:36.874 [2024-12-15 13:15:44.700161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.874 [2024-12-15 13:15:44.700552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.874 [2024-12-15 13:15:44.700599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.874 [2024-12-15 13:15:44.700625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.874 [2024-12-15 13:15:44.701226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.874 [2024-12-15 13:15:44.701811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.874 [2024-12-15 13:15:44.701820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.874 [2024-12-15 13:15:44.701840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.874 [2024-12-15 13:15:44.701848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.874 [2024-12-15 13:15:44.713134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.874 [2024-12-15 13:15:44.713428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.874 [2024-12-15 13:15:44.713445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.874 [2024-12-15 13:15:44.713453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.874 [2024-12-15 13:15:44.713622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.874 [2024-12-15 13:15:44.713793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.874 [2024-12-15 13:15:44.713804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.874 [2024-12-15 13:15:44.713810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.874 [2024-12-15 13:15:44.713817] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.874 [2024-12-15 13:15:44.725955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.874 [2024-12-15 13:15:44.726318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.874 [2024-12-15 13:15:44.726336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.874 [2024-12-15 13:15:44.726344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.874 [2024-12-15 13:15:44.726502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.874 [2024-12-15 13:15:44.726662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.874 [2024-12-15 13:15:44.726672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.874 [2024-12-15 13:15:44.726678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.874 [2024-12-15 13:15:44.726684] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.874 [2024-12-15 13:15:44.738861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.874 [2024-12-15 13:15:44.739217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.874 [2024-12-15 13:15:44.739235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.874 [2024-12-15 13:15:44.739242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.874 [2024-12-15 13:15:44.739402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.874 [2024-12-15 13:15:44.739562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.874 [2024-12-15 13:15:44.739571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.874 [2024-12-15 13:15:44.739577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.874 [2024-12-15 13:15:44.739583] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.874 [2024-12-15 13:15:44.751708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.874 [2024-12-15 13:15:44.752057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.874 [2024-12-15 13:15:44.752075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.874 [2024-12-15 13:15:44.752093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.874 [2024-12-15 13:15:44.752253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.874 [2024-12-15 13:15:44.752413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.874 [2024-12-15 13:15:44.752423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.874 [2024-12-15 13:15:44.752429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.874 [2024-12-15 13:15:44.752439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.874 [2024-12-15 13:15:44.764543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:36.874 [2024-12-15 13:15:44.764887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.874 [2024-12-15 13:15:44.764905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:36.874 [2024-12-15 13:15:44.764913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:36.874 [2024-12-15 13:15:44.765081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:36.874 [2024-12-15 13:15:44.765249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:36.874 [2024-12-15 13:15:44.765259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:36.874 [2024-12-15 13:15:44.765266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:36.874 [2024-12-15 13:15:44.765273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:36.874 [2024-12-15 13:15:44.777562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.135 [2024-12-15 13:15:44.777979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.135 [2024-12-15 13:15:44.778027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.135 [2024-12-15 13:15:44.778050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.135 [2024-12-15 13:15:44.778632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.135 [2024-12-15 13:15:44.779140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.135 [2024-12-15 13:15:44.779150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.135 [2024-12-15 13:15:44.779157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.135 [2024-12-15 13:15:44.779164] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.135 [2024-12-15 13:15:44.790488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.135 [2024-12-15 13:15:44.790920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.135 [2024-12-15 13:15:44.790966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.135 [2024-12-15 13:15:44.790989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.135 [2024-12-15 13:15:44.791572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.135 [2024-12-15 13:15:44.792132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.135 [2024-12-15 13:15:44.792142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.135 [2024-12-15 13:15:44.792149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.135 [2024-12-15 13:15:44.792156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.135 [2024-12-15 13:15:44.803384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.135 [2024-12-15 13:15:44.803782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.135 [2024-12-15 13:15:44.803840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.135 [2024-12-15 13:15:44.803864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.135 [2024-12-15 13:15:44.804447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.135 [2024-12-15 13:15:44.805044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.805072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.805093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.805112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.816334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.816653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.816669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.816676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.816841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.817002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.817012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.817019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.817025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.829166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.829643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.829688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.829712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.830137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.830298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.830308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.830314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.830320] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.841965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.842251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.842269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.842279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.842439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.842599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.842609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.842615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.842620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.854857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.855134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.855151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.855159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.855318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.855477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.855486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.855493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.855499] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.867605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.867968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.867986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.867994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.868162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.868331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.868342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.868348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.868354] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.880491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.880913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.880964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.880972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.881148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.881307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.881320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.881326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.881332] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.893438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.893866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.893901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.893909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.894077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.894246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.894256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.894262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.894268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.906491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.906911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.906958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.906983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.907411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.907571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.907580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.907586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.907592] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.919324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.919738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.919755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.919762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.919927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.920086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.920096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.920102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.136 [2024-12-15 13:15:44.920114] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.136 [2024-12-15 13:15:44.932232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.136 [2024-12-15 13:15:44.932638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.136 [2024-12-15 13:15:44.932655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.136 [2024-12-15 13:15:44.932663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.136 [2024-12-15 13:15:44.932822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.136 [2024-12-15 13:15:44.933011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.136 [2024-12-15 13:15:44.933021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.136 [2024-12-15 13:15:44.933027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.137 [2024-12-15 13:15:44.933033] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.137 [2024-12-15 13:15:44.945179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.137 [2024-12-15 13:15:44.945542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.137 [2024-12-15 13:15:44.945587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.137 [2024-12-15 13:15:44.945611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.137 [2024-12-15 13:15:44.946124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.137 [2024-12-15 13:15:44.946294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.137 [2024-12-15 13:15:44.946303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.137 [2024-12-15 13:15:44.946310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.137 [2024-12-15 13:15:44.946316] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.137 [2024-12-15 13:15:44.958136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.137 [2024-12-15 13:15:44.958552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.137 [2024-12-15 13:15:44.958569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.137 [2024-12-15 13:15:44.958577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.137 [2024-12-15 13:15:44.958735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.137 [2024-12-15 13:15:44.958899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.137 [2024-12-15 13:15:44.958909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.137 [2024-12-15 13:15:44.958915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.137 [2024-12-15 13:15:44.958922] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.137 [2024-12-15 13:15:44.970873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.137 [2024-12-15 13:15:44.971305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.137 [2024-12-15 13:15:44.971350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.137 [2024-12-15 13:15:44.971374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.137 [2024-12-15 13:15:44.971969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.137 [2024-12-15 13:15:44.972539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.137 [2024-12-15 13:15:44.972557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.137 [2024-12-15 13:15:44.972572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.137 [2024-12-15 13:15:44.972586] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.137 [2024-12-15 13:15:44.985958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.137 [2024-12-15 13:15:44.986473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.137 [2024-12-15 13:15:44.986517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.137 [2024-12-15 13:15:44.986540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.137 [2024-12-15 13:15:44.987137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.137 [2024-12-15 13:15:44.987429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.137 [2024-12-15 13:15:44.987442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.137 [2024-12-15 13:15:44.987451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.137 [2024-12-15 13:15:44.987462] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.137 [2024-12-15 13:15:44.998875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.137 [2024-12-15 13:15:44.999303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.137 [2024-12-15 13:15:44.999321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.137 [2024-12-15 13:15:44.999329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.137 [2024-12-15 13:15:44.999498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.137 [2024-12-15 13:15:44.999665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.137 [2024-12-15 13:15:44.999675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.137 [2024-12-15 13:15:44.999682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.137 [2024-12-15 13:15:44.999688] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.137 [2024-12-15 13:15:45.011638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.137 [2024-12-15 13:15:45.012056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.137 [2024-12-15 13:15:45.012073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.137 [2024-12-15 13:15:45.012081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.137 [2024-12-15 13:15:45.012244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.137 [2024-12-15 13:15:45.012403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.137 [2024-12-15 13:15:45.012412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.137 [2024-12-15 13:15:45.012418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.137 [2024-12-15 13:15:45.012425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.137 [2024-12-15 13:15:45.024400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.137 [2024-12-15 13:15:45.024812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.137 [2024-12-15 13:15:45.024833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.137 [2024-12-15 13:15:45.024841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.137 [2024-12-15 13:15:45.025001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.137 [2024-12-15 13:15:45.025161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.137 [2024-12-15 13:15:45.025170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.137 [2024-12-15 13:15:45.025176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.137 [2024-12-15 13:15:45.025182] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.137 [2024-12-15 13:15:45.037244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.137 [2024-12-15 13:15:45.037644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.137 [2024-12-15 13:15:45.037661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.137 [2024-12-15 13:15:45.037669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.137 [2024-12-15 13:15:45.037843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.137 [2024-12-15 13:15:45.038013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.137 [2024-12-15 13:15:45.038023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.137 [2024-12-15 13:15:45.038030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.137 [2024-12-15 13:15:45.038037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.050062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.050453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.050470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.050477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.050637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.050796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.050808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.050814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.050821] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.062871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.063211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.063227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.063234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.063392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.063551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.063560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.063567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.063573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.075602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.076012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.076030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.076037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.076196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.076356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.076365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.076372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.076378] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.088348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.088758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.088803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.088838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.089424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.089814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.089831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.089838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.089854] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.101264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.101685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.101702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.101710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.101897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.102066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.102076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.102082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.102088] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.113996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.114413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.114430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.114438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.114597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.114756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.114766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.114772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.114778] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.126750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.127076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.127093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.127101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.127260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.127419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.127428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.127435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.127441] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.139617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.139963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.139983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.139991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.140150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.140310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.140319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.140325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.140331] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.152376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.152728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.152745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.152752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.152937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.153105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.153115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.153122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.153129] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.165130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.165543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.165586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.165612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.166142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.166303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.166311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.166317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.166323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.177864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.178271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.178288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.178295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.404 [2024-12-15 13:15:45.178457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.404 [2024-12-15 13:15:45.178617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.404 [2024-12-15 13:15:45.178626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.404 [2024-12-15 13:15:45.178632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.404 [2024-12-15 13:15:45.178639] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.404 [2024-12-15 13:15:45.190708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.404 [2024-12-15 13:15:45.191168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.404 [2024-12-15 13:15:45.191214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.404 [2024-12-15 13:15:45.191239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.191756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.191944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.191954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.191961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.191968] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.405 6108.80 IOPS, 23.86 MiB/s [2024-12-15T12:15:45.312Z] [2024-12-15 13:15:45.204950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.405 [2024-12-15 13:15:45.205307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.405 [2024-12-15 13:15:45.205325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.405 [2024-12-15 13:15:45.205333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.205502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.205670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.205680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.205686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.205692] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.405 [2024-12-15 13:15:45.217735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.405 [2024-12-15 13:15:45.218128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.405 [2024-12-15 13:15:45.218146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.405 [2024-12-15 13:15:45.218154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.218313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.218472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.218485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.218492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.218498] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.405 [2024-12-15 13:15:45.230577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.405 [2024-12-15 13:15:45.230996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.405 [2024-12-15 13:15:45.231015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.405 [2024-12-15 13:15:45.231023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.231196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.231356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.231365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.231371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.231377] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.405 [2024-12-15 13:15:45.243361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.405 [2024-12-15 13:15:45.243786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.405 [2024-12-15 13:15:45.243843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.405 [2024-12-15 13:15:45.243868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.244351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.244511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.244521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.244527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.244534] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.405 [2024-12-15 13:15:45.256157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.405 [2024-12-15 13:15:45.256568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.405 [2024-12-15 13:15:45.256611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.405 [2024-12-15 13:15:45.256636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.257196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.257357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.257367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.257373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.257383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.405 [2024-12-15 13:15:45.268893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.405 [2024-12-15 13:15:45.269308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.405 [2024-12-15 13:15:45.269353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.405 [2024-12-15 13:15:45.269377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.269973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.270166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.270176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.270181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.270188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.405 [2024-12-15 13:15:45.281614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.405 [2024-12-15 13:15:45.282030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.405 [2024-12-15 13:15:45.282049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.405 [2024-12-15 13:15:45.282055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.282215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.282375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.282384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.282390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.282397] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.405 [2024-12-15 13:15:45.294433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.405 [2024-12-15 13:15:45.294858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.405 [2024-12-15 13:15:45.294875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.405 [2024-12-15 13:15:45.294883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.295042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.295202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.295211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.295217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.295223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.405 [2024-12-15 13:15:45.307276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.405 [2024-12-15 13:15:45.307697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.405 [2024-12-15 13:15:45.307748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.405 [2024-12-15 13:15:45.307772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.405 [2024-12-15 13:15:45.308360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.405 [2024-12-15 13:15:45.308530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.405 [2024-12-15 13:15:45.308540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.405 [2024-12-15 13:15:45.308546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.405 [2024-12-15 13:15:45.308554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.665 [2024-12-15 13:15:45.320109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.665 [2024-12-15 13:15:45.320528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.665 [2024-12-15 13:15:45.320546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.665 [2024-12-15 13:15:45.320554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.665 [2024-12-15 13:15:45.320723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.665 [2024-12-15 13:15:45.320898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.665 [2024-12-15 13:15:45.320908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.665 [2024-12-15 13:15:45.320914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.665 [2024-12-15 13:15:45.320921] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.665 [2024-12-15 13:15:45.332965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.665 [2024-12-15 13:15:45.333331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.665 [2024-12-15 13:15:45.333348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.665 [2024-12-15 13:15:45.333355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.665 [2024-12-15 13:15:45.333514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.665 [2024-12-15 13:15:45.333673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.665 [2024-12-15 13:15:45.333682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.665 [2024-12-15 13:15:45.333689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.665 [2024-12-15 13:15:45.333695] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.665 [2024-12-15 13:15:45.345815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.665 [2024-12-15 13:15:45.346233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.665 [2024-12-15 13:15:45.346282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.665 [2024-12-15 13:15:45.346306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.665 [2024-12-15 13:15:45.346907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.665 [2024-12-15 13:15:45.347486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.665 [2024-12-15 13:15:45.347496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.665 [2024-12-15 13:15:45.347502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.665 [2024-12-15 13:15:45.347509] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.665 [2024-12-15 13:15:45.358539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.665 [2024-12-15 13:15:45.358944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.665 [2024-12-15 13:15:45.358962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.665 [2024-12-15 13:15:45.358969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.665 [2024-12-15 13:15:45.359128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.665 [2024-12-15 13:15:45.359287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.665 [2024-12-15 13:15:45.359296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.665 [2024-12-15 13:15:45.359302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.665 [2024-12-15 13:15:45.359309] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.665 [2024-12-15 13:15:45.371401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.665 [2024-12-15 13:15:45.371793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.665 [2024-12-15 13:15:45.371810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.665 [2024-12-15 13:15:45.371817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.665 [2024-12-15 13:15:45.371983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.665 [2024-12-15 13:15:45.372145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.665 [2024-12-15 13:15:45.372154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.665 [2024-12-15 13:15:45.372160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.665 [2024-12-15 13:15:45.372166] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.665 [2024-12-15 13:15:45.384180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.665 [2024-12-15 13:15:45.384588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.665 [2024-12-15 13:15:45.384630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.665 [2024-12-15 13:15:45.384655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.665 [2024-12-15 13:15:45.385180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.665 [2024-12-15 13:15:45.385341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.665 [2024-12-15 13:15:45.385352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.665 [2024-12-15 13:15:45.385359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.665 [2024-12-15 13:15:45.385365] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.665 [2024-12-15 13:15:45.397043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.665 [2024-12-15 13:15:45.397457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.665 [2024-12-15 13:15:45.397497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.665 [2024-12-15 13:15:45.397522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.398121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.398336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.398345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.398351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.398357] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.409884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.410307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.410354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.410377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.410973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.411498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.411508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.411514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.411521] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.422631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.423042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.423059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.423066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.423225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.423385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.423394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.423401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.423407] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.435435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.435775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.435792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.435799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.435986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.436155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.436164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.436171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.436178] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.448354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.448768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.448813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.448855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.449437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.449627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.449637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.449643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.449650] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.461388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.461810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.461834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.461842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.462010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.462179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.462188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.462194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.462201] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.474215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.474616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.474636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.474644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.474804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.474970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.474980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.474986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.474992] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.487025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.487445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.487463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.487470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.487629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.487788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.487798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.487804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.487811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.499903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.500314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.500332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.500339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.500502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.500662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.500673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.500679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.500686] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.512733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.513154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.513192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.513219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.666 [2024-12-15 13:15:45.513786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.666 [2024-12-15 13:15:45.513963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.666 [2024-12-15 13:15:45.513973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.666 [2024-12-15 13:15:45.513979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.666 [2024-12-15 13:15:45.513987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.666 [2024-12-15 13:15:45.525658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.666 [2024-12-15 13:15:45.526094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.666 [2024-12-15 13:15:45.526140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.666 [2024-12-15 13:15:45.526164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.667 [2024-12-15 13:15:45.526663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.667 [2024-12-15 13:15:45.526840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.667 [2024-12-15 13:15:45.526850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.667 [2024-12-15 13:15:45.526857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.667 [2024-12-15 13:15:45.526865] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.667 [2024-12-15 13:15:45.538525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.667 [2024-12-15 13:15:45.538955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.667 [2024-12-15 13:15:45.539001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.667 [2024-12-15 13:15:45.539025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.667 [2024-12-15 13:15:45.539601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.667 [2024-12-15 13:15:45.539771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.667 [2024-12-15 13:15:45.539781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.667 [2024-12-15 13:15:45.539789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.667 [2024-12-15 13:15:45.539796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.667 [2024-12-15 13:15:45.551425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.667 [2024-12-15 13:15:45.551711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.667 [2024-12-15 13:15:45.551728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.667 [2024-12-15 13:15:45.551735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.667 [2024-12-15 13:15:45.551909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.667 [2024-12-15 13:15:45.552078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.667 [2024-12-15 13:15:45.552088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.667 [2024-12-15 13:15:45.552098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.667 [2024-12-15 13:15:45.552105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.667 [2024-12-15 13:15:45.564202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.667 [2024-12-15 13:15:45.564638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.667 [2024-12-15 13:15:45.564682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.667 [2024-12-15 13:15:45.564705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.667 [2024-12-15 13:15:45.565303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.667 [2024-12-15 13:15:45.565902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.667 [2024-12-15 13:15:45.565930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.667 [2024-12-15 13:15:45.565951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.667 [2024-12-15 13:15:45.565958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.926 [2024-12-15 13:15:45.577097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.926 [2024-12-15 13:15:45.577428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.926 [2024-12-15 13:15:45.577446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.926 [2024-12-15 13:15:45.577454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.926 [2024-12-15 13:15:45.577622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.926 [2024-12-15 13:15:45.577791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.926 [2024-12-15 13:15:45.577800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.926 [2024-12-15 13:15:45.577807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.926 [2024-12-15 13:15:45.577814] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.926 [2024-12-15 13:15:45.590003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.926 [2024-12-15 13:15:45.590426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.926 [2024-12-15 13:15:45.590444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.926 [2024-12-15 13:15:45.590453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.926 [2024-12-15 13:15:45.590621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.926 [2024-12-15 13:15:45.590789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.926 [2024-12-15 13:15:45.590799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.926 [2024-12-15 13:15:45.590805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.926 [2024-12-15 13:15:45.590812] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.926 [2024-12-15 13:15:45.602908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.926 [2024-12-15 13:15:45.603337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.926 [2024-12-15 13:15:45.603354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.926 [2024-12-15 13:15:45.603387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.926 [2024-12-15 13:15:45.603937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.926 [2024-12-15 13:15:45.604108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.926 [2024-12-15 13:15:45.604118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.926 [2024-12-15 13:15:45.604125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.926 [2024-12-15 13:15:45.604131] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.926 [2024-12-15 13:15:45.615688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.926 [2024-12-15 13:15:45.616112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.926 [2024-12-15 13:15:45.616157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.926 [2024-12-15 13:15:45.616181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.616762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.616962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.616972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.616978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.616985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.628544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.628954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.628971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.628978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.629138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.629297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.629306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.629312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.629318] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.641299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.641632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.641649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.641659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.641818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.642006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.642017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.642023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.642029] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.654024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.654367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.654384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.654392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.654551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.654710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.654719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.654725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.654731] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.666813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.667175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.667220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.667244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.667817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.668005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.668015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.668022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.668029] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.679627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.679975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.679993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.680001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.680161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.680324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.680334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.680340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.680346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.692391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.692839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.692885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.692909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.693314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.693475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.693484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.693491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.693497] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.705201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.705587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.705605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.705613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.705772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.705959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.705969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.705976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.705983] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.718134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.718552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.718607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.718632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.719166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.719327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.719336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.719348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.719356] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.730935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.731273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.731290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.731297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.731456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.927 [2024-12-15 13:15:45.731615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.927 [2024-12-15 13:15:45.731624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.927 [2024-12-15 13:15:45.731630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.927 [2024-12-15 13:15:45.731637] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.927 [2024-12-15 13:15:45.743667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.927 [2024-12-15 13:15:45.744091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.927 [2024-12-15 13:15:45.744148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.927 [2024-12-15 13:15:45.744173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.927 [2024-12-15 13:15:45.744755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.928 [2024-12-15 13:15:45.745329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.928 [2024-12-15 13:15:45.745339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.928 [2024-12-15 13:15:45.745345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.928 [2024-12-15 13:15:45.745352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.928 [2024-12-15 13:15:45.756422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:37.928 [2024-12-15 13:15:45.756764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.928 [2024-12-15 13:15:45.756781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:37.928 [2024-12-15 13:15:45.756788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:37.928 [2024-12-15 13:15:45.756973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:37.928 [2024-12-15 13:15:45.757142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:37.928 [2024-12-15 13:15:45.757152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:37.928 [2024-12-15 13:15:45.757158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:37.928 [2024-12-15 13:15:45.757165] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:37.928 [2024-12-15 13:15:45.769173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.928 [2024-12-15 13:15:45.769561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.928 [2024-12-15 13:15:45.769578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.928 [2024-12-15 13:15:45.769586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.928 [2024-12-15 13:15:45.769745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.928 [2024-12-15 13:15:45.769928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.928 [2024-12-15 13:15:45.769938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.928 [2024-12-15 13:15:45.769944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.928 [2024-12-15 13:15:45.769951] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.928 [2024-12-15 13:15:45.781956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.928 [2024-12-15 13:15:45.782300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.928 [2024-12-15 13:15:45.782317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.928 [2024-12-15 13:15:45.782324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.928 [2024-12-15 13:15:45.782483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.928 [2024-12-15 13:15:45.782642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.928 [2024-12-15 13:15:45.782651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.928 [2024-12-15 13:15:45.782658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.928 [2024-12-15 13:15:45.782664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.928 [2024-12-15 13:15:45.794725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.928 [2024-12-15 13:15:45.795146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.928 [2024-12-15 13:15:45.795163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.928 [2024-12-15 13:15:45.795170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.928 [2024-12-15 13:15:45.795329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.928 [2024-12-15 13:15:45.795489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.928 [2024-12-15 13:15:45.795498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.928 [2024-12-15 13:15:45.795504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.928 [2024-12-15 13:15:45.795510] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.928 [2024-12-15 13:15:45.807540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.928 [2024-12-15 13:15:45.807897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.928 [2024-12-15 13:15:45.807943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.928 [2024-12-15 13:15:45.807975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.928 [2024-12-15 13:15:45.808482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.928 [2024-12-15 13:15:45.808643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.928 [2024-12-15 13:15:45.808653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.928 [2024-12-15 13:15:45.808658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.928 [2024-12-15 13:15:45.808665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:37.928 [2024-12-15 13:15:45.820325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:37.928 [2024-12-15 13:15:45.820735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.928 [2024-12-15 13:15:45.820753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:37.928 [2024-12-15 13:15:45.820760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:37.928 [2024-12-15 13:15:45.820944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:37.928 [2024-12-15 13:15:45.821113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:37.928 [2024-12-15 13:15:45.821123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:37.928 [2024-12-15 13:15:45.821129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:37.928 [2024-12-15 13:15:45.821135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:38.188 [2024-12-15 13:15:45.833303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:38.188 [2024-12-15 13:15:45.833654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:38.188 [2024-12-15 13:15:45.833671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:38.188 [2024-12-15 13:15:45.833679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:38.188 [2024-12-15 13:15:45.833852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:38.188 [2024-12-15 13:15:45.834022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:38.188 [2024-12-15 13:15:45.834033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:38.188 [2024-12-15 13:15:45.834039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:38.188 [2024-12-15 13:15:45.834045] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:38.188 [2024-12-15 13:15:45.846221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:38.188 [2024-12-15 13:15:45.846598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:38.188 [2024-12-15 13:15:45.846615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:38.188 [2024-12-15 13:15:45.846622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:38.188 [2024-12-15 13:15:45.846790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:38.188 [2024-12-15 13:15:45.846968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:38.188 [2024-12-15 13:15:45.846978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:38.188 [2024-12-15 13:15:45.846985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:38.188 [2024-12-15 13:15:45.846991] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:38.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1201808 Killed "${NVMF_APP[@]}" "$@"
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:38.188 [2024-12-15 13:15:45.859211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:38.188 [2024-12-15 13:15:45.859558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:38.188 [2024-12-15 13:15:45.859576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:38.188 [2024-12-15 13:15:45.859583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:38.188 [2024-12-15 13:15:45.859752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:38.188 [2024-12-15 13:15:45.859925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:38.188 [2024-12-15 13:15:45.859936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:38.188 [2024-12-15 13:15:45.859942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:38.188 [2024-12-15 13:15:45.859950] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1202966
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1202966
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1202966 ']'
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:38.188 13:15:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:38.188 [2024-12-15 13:15:45.872283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:38.188 [2024-12-15 13:15:45.872721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:38.188 [2024-12-15 13:15:45.872740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420
00:35:38.188 [2024-12-15 13:15:45.872747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set
00:35:38.188 [2024-12-15 13:15:45.872930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor
00:35:38.188 [2024-12-15 13:15:45.873104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:38.188 [2024-12-15 13:15:45.873114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:38.188 [2024-12-15 13:15:45.873120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:38.189 [2024-12-15 13:15:45.873127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:38.189 [2024-12-15 13:15:45.885309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:45.885721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:45.885739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:45.885746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:45.885924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:45.886099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:45.886112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.189 [2024-12-15 13:15:45.886119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.189 [2024-12-15 13:15:45.886125] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.189 [2024-12-15 13:15:45.898290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:45.898715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:45.898733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:45.898741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:45.898933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:45.899107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:45.899117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.189 [2024-12-15 13:15:45.899123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.189 [2024-12-15 13:15:45.899130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.189 [2024-12-15 13:15:45.911335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:45.911749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:45.911768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:45.911775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:45.911967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:45.912142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:45.912154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.189 [2024-12-15 13:15:45.912161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.189 [2024-12-15 13:15:45.912167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:38.189 [2024-12-15 13:15:45.916075] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:38.189 [2024-12-15 13:15:45.916115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:38.189 [2024-12-15 13:15:45.924437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:45.924848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:45.924867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:45.924875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:45.925049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:45.925230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:45.925239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.189 [2024-12-15 13:15:45.925246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.189 [2024-12-15 13:15:45.925253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.189 [2024-12-15 13:15:45.937479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:45.937886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:45.937905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:45.937913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:45.938087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:45.938270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:45.938280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.189 [2024-12-15 13:15:45.938287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.189 [2024-12-15 13:15:45.938294] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.189 [2024-12-15 13:15:45.950920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:45.951316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:45.951334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:45.951342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:45.951526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:45.951711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:45.951724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.189 [2024-12-15 13:15:45.951731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.189 [2024-12-15 13:15:45.951739] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.189 [2024-12-15 13:15:45.963974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:45.964328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:45.964346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:45.964353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:45.964522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:45.964690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:45.964699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.189 [2024-12-15 13:15:45.964708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.189 [2024-12-15 13:15:45.964715] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.189 [2024-12-15 13:15:45.977069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:45.977470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:45.977487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:45.977495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:45.977663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:45.977838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:45.977848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.189 [2024-12-15 13:15:45.977855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.189 [2024-12-15 13:15:45.977862] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.189 [2024-12-15 13:15:45.990104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:45.990370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:45.990388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:45.990395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:45.990568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:45.990741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:45.990751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.189 [2024-12-15 13:15:45.990757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.189 [2024-12-15 13:15:45.990768] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.189 [2024-12-15 13:15:45.999993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:38.189 [2024-12-15 13:15:46.003147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.189 [2024-12-15 13:15:46.003580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.189 [2024-12-15 13:15:46.003599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.189 [2024-12-15 13:15:46.003607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.189 [2024-12-15 13:15:46.003780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.189 [2024-12-15 13:15:46.003958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.189 [2024-12-15 13:15:46.003969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.190 [2024-12-15 13:15:46.003975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.190 [2024-12-15 13:15:46.003982] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.190 [2024-12-15 13:15:46.016116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.190 [2024-12-15 13:15:46.016455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.190 [2024-12-15 13:15:46.016474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.190 [2024-12-15 13:15:46.016482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.190 [2024-12-15 13:15:46.016655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.190 [2024-12-15 13:15:46.016835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.190 [2024-12-15 13:15:46.016845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.190 [2024-12-15 13:15:46.016852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.190 [2024-12-15 13:15:46.016859] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:38.190 [2024-12-15 13:15:46.021841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:38.190 [2024-12-15 13:15:46.021869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:38.190 [2024-12-15 13:15:46.021876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:38.190 [2024-12-15 13:15:46.021882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:38.190 [2024-12-15 13:15:46.021887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:38.190 [2024-12-15 13:15:46.023150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:38.190 [2024-12-15 13:15:46.023263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.190 [2024-12-15 13:15:46.023263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:38.190 [2024-12-15 13:15:46.029154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.190 [2024-12-15 13:15:46.029464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.190 [2024-12-15 13:15:46.029485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.190 [2024-12-15 13:15:46.029495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.190 [2024-12-15 13:15:46.029675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.190 [2024-12-15 13:15:46.029861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.190 [2024-12-15 13:15:46.029873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.190 [2024-12-15 13:15:46.029881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.190 [2024-12-15 13:15:46.029889] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.190 [2024-12-15 13:15:46.042267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.190 [2024-12-15 13:15:46.042636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.190 [2024-12-15 13:15:46.042658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.190 [2024-12-15 13:15:46.042668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.190 [2024-12-15 13:15:46.042852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.190 [2024-12-15 13:15:46.043029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.190 [2024-12-15 13:15:46.043039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.190 [2024-12-15 13:15:46.043047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.190 [2024-12-15 13:15:46.043056] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.190 [2024-12-15 13:15:46.055265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.190 [2024-12-15 13:15:46.055661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.190 [2024-12-15 13:15:46.055684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.190 [2024-12-15 13:15:46.055693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.190 [2024-12-15 13:15:46.055876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.190 [2024-12-15 13:15:46.056052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.190 [2024-12-15 13:15:46.056062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.190 [2024-12-15 13:15:46.056070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.190 [2024-12-15 13:15:46.056079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.190 [2024-12-15 13:15:46.068309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.190 [2024-12-15 13:15:46.068635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.190 [2024-12-15 13:15:46.068658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.190 [2024-12-15 13:15:46.068668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.190 [2024-12-15 13:15:46.068852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.190 [2024-12-15 13:15:46.069031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.190 [2024-12-15 13:15:46.069048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.190 [2024-12-15 13:15:46.069056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.190 [2024-12-15 13:15:46.069065] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.190 [2024-12-15 13:15:46.081435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.190 [2024-12-15 13:15:46.081753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.190 [2024-12-15 13:15:46.081776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.190 [2024-12-15 13:15:46.081784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.190 [2024-12-15 13:15:46.081968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.190 [2024-12-15 13:15:46.082143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.190 [2024-12-15 13:15:46.082153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.190 [2024-12-15 13:15:46.082161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.190 [2024-12-15 13:15:46.082168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.450 [2024-12-15 13:15:46.094545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.450 [2024-12-15 13:15:46.094911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.450 [2024-12-15 13:15:46.094931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.450 [2024-12-15 13:15:46.094940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.450 [2024-12-15 13:15:46.095114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.450 [2024-12-15 13:15:46.095290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.450 [2024-12-15 13:15:46.095300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.450 [2024-12-15 13:15:46.095307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.450 [2024-12-15 13:15:46.095315] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.450 [2024-12-15 13:15:46.107546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.450 [2024-12-15 13:15:46.107942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.450 [2024-12-15 13:15:46.107962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.450 [2024-12-15 13:15:46.107971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.450 [2024-12-15 13:15:46.108144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.450 [2024-12-15 13:15:46.108317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.450 [2024-12-15 13:15:46.108327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.450 [2024-12-15 13:15:46.108335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:38.450 [2024-12-15 13:15:46.108347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.450 [2024-12-15 13:15:46.120572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.450 [2024-12-15 13:15:46.120866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.450 [2024-12-15 13:15:46.120887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.450 [2024-12-15 13:15:46.120897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.450 [2024-12-15 13:15:46.121073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.450 [2024-12-15 13:15:46.121252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.450 [2024-12-15 13:15:46.121263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.450 [2024-12-15 13:15:46.121274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.450 [2024-12-15 13:15:46.121282] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.450 [2024-12-15 13:15:46.133654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.450 [2024-12-15 13:15:46.133979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.450 [2024-12-15 13:15:46.133997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.450 [2024-12-15 13:15:46.134006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.450 [2024-12-15 13:15:46.134180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.450 [2024-12-15 13:15:46.134352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.450 [2024-12-15 13:15:46.134362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.450 [2024-12-15 13:15:46.134369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.450 [2024-12-15 13:15:46.134376] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.450 [2024-12-15 13:15:46.146745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.450 [2024-12-15 13:15:46.147034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.450 [2024-12-15 13:15:46.147052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.450 [2024-12-15 13:15:46.147060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.450 [2024-12-15 13:15:46.147243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.450 [2024-12-15 13:15:46.147418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.450 [2024-12-15 13:15:46.147428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.450 [2024-12-15 13:15:46.147435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.450 [2024-12-15 13:15:46.147443] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.450 [2024-12-15 13:15:46.149754] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.450 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.450 [2024-12-15 13:15:46.159820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.450 [2024-12-15 13:15:46.160180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.450 [2024-12-15 13:15:46.160199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.450 [2024-12-15 13:15:46.160206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.450 [2024-12-15 13:15:46.160379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.450 [2024-12-15 13:15:46.160553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.450 [2024-12-15 13:15:46.160562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.450 [2024-12-15 13:15:46.160569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.451 [2024-12-15 13:15:46.160576] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.451 [2024-12-15 13:15:46.172799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.451 [2024-12-15 13:15:46.173133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.451 [2024-12-15 13:15:46.173153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.451 [2024-12-15 13:15:46.173162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.451 [2024-12-15 13:15:46.173335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.451 [2024-12-15 13:15:46.173508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.451 [2024-12-15 13:15:46.173517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.451 [2024-12-15 13:15:46.173523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.451 [2024-12-15 13:15:46.173531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.451 [2024-12-15 13:15:46.185926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.451 [2024-12-15 13:15:46.186261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.451 [2024-12-15 13:15:46.186281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.451 [2024-12-15 13:15:46.186293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.451 [2024-12-15 13:15:46.186467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.451 [2024-12-15 13:15:46.186641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.451 [2024-12-15 13:15:46.186651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.451 [2024-12-15 13:15:46.186658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.451 [2024-12-15 13:15:46.186665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.451 Malloc0 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.451 [2024-12-15 13:15:46.199035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.451 [2024-12-15 13:15:46.199451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.451 [2024-12-15 13:15:46.199470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x690490 with addr=10.0.0.2, port=4420 00:35:38.451 [2024-12-15 13:15:46.199478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x690490 is same with the state(6) to be set 00:35:38.451 [2024-12-15 13:15:46.199651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690490 (9): Bad file descriptor 00:35:38.451 [2024-12-15 13:15:46.199832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:38.451 [2024-12-15 13:15:46.199842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:38.451 [2024-12-15 13:15:46.199850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:38.451 [2024-12-15 13:15:46.199856] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.451 5090.67 IOPS, 19.89 MiB/s [2024-12-15T12:15:46.358Z] 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.451 [2024-12-15 13:15:46.211839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.451 [2024-12-15 13:15:46.212035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.451 13:15:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1202063 00:35:38.710 [2024-12-15 13:15:46.369017] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:35:40.342 5699.71 IOPS, 22.26 MiB/s [2024-12-15T12:15:49.626Z] 6424.25 IOPS, 25.09 MiB/s [2024-12-15T12:15:50.562Z] 6961.22 IOPS, 27.19 MiB/s [2024-12-15T12:15:51.498Z] 7394.80 IOPS, 28.89 MiB/s [2024-12-15T12:15:52.434Z] 7760.91 IOPS, 30.32 MiB/s [2024-12-15T12:15:53.370Z] 8070.33 IOPS, 31.52 MiB/s [2024-12-15T12:15:54.307Z] 8332.77 IOPS, 32.55 MiB/s [2024-12-15T12:15:55.245Z] 8560.79 IOPS, 33.44 MiB/s [2024-12-15T12:15:55.503Z] 8745.73 IOPS, 34.16 MiB/s 00:35:47.596 Latency(us) 00:35:47.596 [2024-12-15T12:15:55.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.596 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:47.596 Verification LBA range: start 0x0 length 0x4000 00:35:47.596 Nvme1n1 : 15.05 8719.42 34.06 11271.30 0.00 6366.23 419.35 41943.04 00:35:47.596 [2024-12-15T12:15:55.503Z] =================================================================================================================== 00:35:47.596 [2024-12-15T12:15:55.503Z] Total : 8719.42 34.06 11271.30 0.00 6366.23 419.35 41943.04 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:47.596 rmmod nvme_tcp 00:35:47.596 rmmod nvme_fabrics 00:35:47.596 rmmod nvme_keyring 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1202966 ']' 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1202966 00:35:47.596 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1202966 ']' 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1202966 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1202966 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1202966' 00:35:47.854 killing process with pid 1202966 00:35:47.854 
13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1202966 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1202966 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.854 13:15:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:50.391 00:35:50.391 real 0m25.878s 00:35:50.391 user 1m0.423s 00:35:50.391 sys 0m6.668s 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:50.391 ************************************ 00:35:50.391 END TEST nvmf_bdevperf 00:35:50.391 
************************************ 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.391 ************************************ 00:35:50.391 START TEST nvmf_target_disconnect 00:35:50.391 ************************************ 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:50.391 * Looking for test storage... 00:35:50.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:35:50.391 13:15:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:50.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.391 --rc genhtml_branch_coverage=1 00:35:50.391 --rc genhtml_function_coverage=1 00:35:50.391 --rc genhtml_legend=1 00:35:50.391 --rc geninfo_all_blocks=1 00:35:50.391 --rc geninfo_unexecuted_blocks=1 
00:35:50.391 00:35:50.391 ' 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:50.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.391 --rc genhtml_branch_coverage=1 00:35:50.391 --rc genhtml_function_coverage=1 00:35:50.391 --rc genhtml_legend=1 00:35:50.391 --rc geninfo_all_blocks=1 00:35:50.391 --rc geninfo_unexecuted_blocks=1 00:35:50.391 00:35:50.391 ' 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:50.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.391 --rc genhtml_branch_coverage=1 00:35:50.391 --rc genhtml_function_coverage=1 00:35:50.391 --rc genhtml_legend=1 00:35:50.391 --rc geninfo_all_blocks=1 00:35:50.391 --rc geninfo_unexecuted_blocks=1 00:35:50.391 00:35:50.391 ' 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:50.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.391 --rc genhtml_branch_coverage=1 00:35:50.391 --rc genhtml_function_coverage=1 00:35:50.391 --rc genhtml_legend=1 00:35:50.391 --rc geninfo_all_blocks=1 00:35:50.391 --rc geninfo_unexecuted_blocks=1 00:35:50.391 00:35:50.391 ' 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.391 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.392 13:15:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:50.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:50.392 13:15:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:56.961 
13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:56.961 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:56.961 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:56.961 Found net devices under 0000:af:00.0: cvl_0_0 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:56.961 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:56.962 Found net devices under 0000:af:00.1: cvl_0_1 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:56.962 13:16:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:56.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:56.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:35:56.962 00:35:56.962 --- 10.0.0.2 ping statistics --- 00:35:56.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.962 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:56.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:56.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:35:56.962 00:35:56.962 --- 10.0.0.1 ping statistics --- 00:35:56.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:56.962 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:56.962 13:16:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.962 13:16:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:56.962 ************************************ 00:35:56.962 START TEST nvmf_target_disconnect_tc1 00:35:56.962 ************************************ 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:56.962 [2024-12-15 13:16:04.116645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.962 [2024-12-15 13:16:04.116691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1578c50 with 
addr=10.0.0.2, port=4420 00:35:56.962 [2024-12-15 13:16:04.116715] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:56.962 [2024-12-15 13:16:04.116728] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:56.962 [2024-12-15 13:16:04.116735] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:56.962 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:56.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:56.962 Initializing NVMe Controllers 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:56.962 00:35:56.962 real 0m0.120s 00:35:56.962 user 0m0.049s 00:35:56.962 sys 0m0.070s 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:56.962 ************************************ 00:35:56.962 END TEST nvmf_target_disconnect_tc1 00:35:56.962 ************************************ 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.962 13:16:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:56.962 ************************************ 00:35:56.962 START TEST nvmf_target_disconnect_tc2 00:35:56.962 ************************************ 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1208035 00:35:56.962 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1208035 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1208035 ']' 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.963 [2024-12-15 13:16:04.253026] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:56.963 [2024-12-15 13:16:04.253071] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:56.963 [2024-12-15 13:16:04.319281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:56.963 [2024-12-15 13:16:04.343650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:56.963 [2024-12-15 13:16:04.343688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:56.963 [2024-12-15 13:16:04.343695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:56.963 [2024-12-15 13:16:04.343701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:56.963 [2024-12-15 13:16:04.343707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:56.963 [2024-12-15 13:16:04.345240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:35:56.963 [2024-12-15 13:16:04.345349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:35:56.963 [2024-12-15 13:16:04.345457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:35:56.963 [2024-12-15 13:16:04.345458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.963 Malloc0 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.963 13:16:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.963 [2024-12-15 13:16:04.516289] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.963 13:16:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.963 [2024-12-15 13:16:04.545337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1208058 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:56.963 13:16:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:58.876 13:16:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1208035 00:35:58.876 13:16:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:58.876 Read completed with error (sct=0, sc=8) 00:35:58.876 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 
Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 [2024-12-15 13:16:06.577250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 
00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 
[2024-12-15 13:16:06.577444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Write completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 00:35:58.877 Read completed with error (sct=0, sc=8) 00:35:58.877 starting I/O failed 
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Write completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Write completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Write completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Write completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 [2024-12-15 13:16:06.577635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Write completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Read completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Write completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.877 Write completed with error (sct=0, sc=8)
00:35:58.877 starting I/O failed
00:35:58.878 Read completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Read completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Write completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Read completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Read completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Read completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Write completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Read completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Read completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Write completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Read completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 Write completed with error (sct=0, sc=8)
00:35:58.878 starting I/O failed
00:35:58.878 [2024-12-15 13:16:06.577846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:35:58.878 [2024-12-15 13:16:06.578050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.578107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.578341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.578378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.578664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.578707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.578958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.579001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.579202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.579235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.579535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.579568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.579835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.579870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.580142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.580176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.580481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.580513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.580712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.580744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.580983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.581006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.581259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.581283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.581476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.581498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.581690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.581712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.581879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.581904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.582077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.582108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.582228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.582260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.582471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.582504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.582721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.582755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.582962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.582996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.583190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.583222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.583416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.583449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.583708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.583741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.583989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.584013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.584184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.584211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.584433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.584465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.584724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.584757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.585038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.585062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.585191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.585213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.585456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.585479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.585676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.585699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.585968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.585992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.586156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.586178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.586443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.878 [2024-12-15 13:16:06.586467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.878 qpair failed and we were unable to recover it.
00:35:58.878 [2024-12-15 13:16:06.586639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.586661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.586853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.586877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.587115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.587139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.587385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.587408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.587684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.587708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.587901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.587925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.588141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.588164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.588407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.588429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.588674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.588697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.588928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.588953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.589238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.589282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.589464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.589497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.589761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.589794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.589998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.590030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.590217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.590246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.590393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.590425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.590624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.590656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.590875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.590909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.591204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.591234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.591499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.591529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.591737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.591767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.591943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.591974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.592230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.592263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.592502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.592535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.592801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.592842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.593038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.593070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.593243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.593276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.593545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.593578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.593848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.593883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.594168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.594198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.594387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.594422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.594623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.594653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.594910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.594941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.595195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.595224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.879 [2024-12-15 13:16:06.595454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.879 [2024-12-15 13:16:06.595484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.879 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.595668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.595697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.595938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.595970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.596207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.596238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.596484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.596514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.596713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.596743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.596917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.596948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.597135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.597168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.597422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.597455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.597636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.597668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.597914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.597950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.598201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.598233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.598474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.598507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.598754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.598787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.598987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.599022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.599285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.599318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.599543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.599576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.599846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.599881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.600075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.600108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.600238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.600270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.600409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.600441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.600705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.600738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.600998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.601032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.601213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.601245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.601374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.601407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.601672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.601704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.601903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.601937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.602081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.602114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.602376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.602408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.602689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.602722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.602999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.603033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.603230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.880 [2024-12-15 13:16:06.603264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.880 qpair failed and we were unable to recover it.
00:35:58.880 [2024-12-15 13:16:06.603462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.880 [2024-12-15 13:16:06.603494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.880 qpair failed and we were unable to recover it. 00:35:58.880 [2024-12-15 13:16:06.603686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.880 [2024-12-15 13:16:06.603719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.880 qpair failed and we were unable to recover it. 00:35:58.880 [2024-12-15 13:16:06.603847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.880 [2024-12-15 13:16:06.603882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.880 qpair failed and we were unable to recover it. 00:35:58.880 [2024-12-15 13:16:06.604123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.880 [2024-12-15 13:16:06.604157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.880 qpair failed and we were unable to recover it. 00:35:58.880 [2024-12-15 13:16:06.604444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.880 [2024-12-15 13:16:06.604482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.880 qpair failed and we were unable to recover it. 
00:35:58.880 [2024-12-15 13:16:06.604745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.880 [2024-12-15 13:16:06.604779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.880 qpair failed and we were unable to recover it. 00:35:58.880 [2024-12-15 13:16:06.604926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.880 [2024-12-15 13:16:06.604960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.880 qpair failed and we were unable to recover it. 00:35:58.880 [2024-12-15 13:16:06.605198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.605230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.605470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.605503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.605743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.605775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 
00:35:58.881 [2024-12-15 13:16:06.606094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.606129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.606417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.606450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.606639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.606673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.606868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.606902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.607139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.607172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 
00:35:58.881 [2024-12-15 13:16:06.607345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.607377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.607589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.607622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.607760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.607794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.607927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.607961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.608156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.608187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 
00:35:58.881 [2024-12-15 13:16:06.608328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.608359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.608600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.608632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.608846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.608881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.609139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.609171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.609439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.609473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 
00:35:58.881 [2024-12-15 13:16:06.609655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.609689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.609944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.609979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.610220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.610253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.610536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.610570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.610844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.610879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 
00:35:58.881 [2024-12-15 13:16:06.611089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.611122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.611269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.611303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.611619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.611652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.611894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.611929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 00:35:58.881 [2024-12-15 13:16:06.612194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.881 [2024-12-15 13:16:06.612227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.881 qpair failed and we were unable to recover it. 
00:35:58.882 [2024-12-15 13:16:06.612477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.612510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.612770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.612803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.613008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.613043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.613252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.613285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.613545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.613578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 
00:35:58.882 [2024-12-15 13:16:06.613838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.613873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.614057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.614091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.614355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.614388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.614571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.614604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.614795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.614845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 
00:35:58.882 [2024-12-15 13:16:06.615110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.615143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.615317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.615349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.615631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.615664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.615927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.615963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.616221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.616254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 
00:35:58.882 [2024-12-15 13:16:06.616495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.616529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.616733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.616766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.616917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.616951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.617222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.617255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.617559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.617592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 
00:35:58.882 [2024-12-15 13:16:06.617840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.617874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.618081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.618114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.618287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.618320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.618543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.618576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.618751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.618784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 
00:35:58.882 [2024-12-15 13:16:06.619041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.619077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.619272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.619304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.619568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.619601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.619787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.619820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.620086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.620120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 
00:35:58.882 [2024-12-15 13:16:06.620310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.620343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.620543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.620577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.620785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.620817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.621080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.621115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.882 [2024-12-15 13:16:06.621337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.621371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 
00:35:58.882 [2024-12-15 13:16:06.621569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.882 [2024-12-15 13:16:06.621602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.882 qpair failed and we were unable to recover it. 00:35:58.883 [2024-12-15 13:16:06.621902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.883 [2024-12-15 13:16:06.621938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.883 qpair failed and we were unable to recover it. 00:35:58.883 [2024-12-15 13:16:06.622118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.883 [2024-12-15 13:16:06.622151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.883 qpair failed and we were unable to recover it. 00:35:58.883 [2024-12-15 13:16:06.622406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.883 [2024-12-15 13:16:06.622439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.883 qpair failed and we were unable to recover it. 00:35:58.883 [2024-12-15 13:16:06.622628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.883 [2024-12-15 13:16:06.622661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.883 qpair failed and we were unable to recover it. 
00:35:58.883 [2024-12-15 13:16:06.622842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:58.883 [2024-12-15 13:16:06.622877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:58.883 qpair failed and we were unable to recover it.
00:35:58.886 [last three messages repeated for every reconnect attempt, timestamps 13:16:06.623085 through 13:16:06.653128, tqpair=0x7fbae4000b90, addr=10.0.0.2, port=4420 unchanged throughout]
00:35:58.886 [2024-12-15 13:16:06.653320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.653354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.653557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.653592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.653775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.653808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.654066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.654100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.654343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.654376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 
00:35:58.886 [2024-12-15 13:16:06.654617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.654650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.654910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.654944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.655183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.655216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.655387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.655420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.655673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.655708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 
00:35:58.886 [2024-12-15 13:16:06.655900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.655936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.656180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.656213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.656508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.656541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.656773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.656806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.657046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.657086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 
00:35:58.886 [2024-12-15 13:16:06.657274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.657308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.657560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.657594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.657867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.657902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.658097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.658130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.658307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.658340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 
00:35:58.886 [2024-12-15 13:16:06.658522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.658556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.658838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.658872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.659006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.659040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.659308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.659342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.659602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.659635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 
00:35:58.886 [2024-12-15 13:16:06.659816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.659862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.660061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.660094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.886 qpair failed and we were unable to recover it. 00:35:58.886 [2024-12-15 13:16:06.660364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.886 [2024-12-15 13:16:06.660396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.660523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.660557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.660845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.660880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 
00:35:58.887 [2024-12-15 13:16:06.661123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.661157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.661412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.661446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.661731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.661764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.661993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.662029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.662276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.662309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 
00:35:58.887 [2024-12-15 13:16:06.662417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.662449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.662581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.662613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.662899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.662936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.663183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.663216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.663395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.663428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 
00:35:58.887 [2024-12-15 13:16:06.663557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.663590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.663864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.663900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.664076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.664109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.664329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.664362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.664635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.664669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 
00:35:58.887 [2024-12-15 13:16:06.664866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.664902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.665159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.665192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.665480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.665513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.665783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.665817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.666035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.666070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 
00:35:58.887 [2024-12-15 13:16:06.666378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.666411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.666664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.666697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.666924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.666960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.667166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.667199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.667397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.667436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 
00:35:58.887 [2024-12-15 13:16:06.667631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.667664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.667933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.667967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.668229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.668263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.668515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.668548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.668793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.668835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 
00:35:58.887 [2024-12-15 13:16:06.669130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.669164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.669297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.669330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.669619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.669652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.669790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.669823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.887 [2024-12-15 13:16:06.670033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.670067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 
00:35:58.887 [2024-12-15 13:16:06.670172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.887 [2024-12-15 13:16:06.670204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.887 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.670384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.670417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.670679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.670712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.670999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.671035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.671220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.671253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 
00:35:58.888 [2024-12-15 13:16:06.671495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.671528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.671818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.671862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.672153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.672186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.672459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.672491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.672711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.672745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 
00:35:58.888 [2024-12-15 13:16:06.673000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.673035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.673305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.673339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.673614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.673648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.673846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.673881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 00:35:58.888 [2024-12-15 13:16:06.674153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.888 [2024-12-15 13:16:06.674186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.888 qpair failed and we were unable to recover it. 
00:35:58.891 [2024-12-15 13:16:06.703851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.703886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.704072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.704107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.704308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.704342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.704562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.704596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.704875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.704910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 
00:35:58.891 [2024-12-15 13:16:06.705194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.705228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.705502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.705536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.705686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.705720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.706024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.706059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.706344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.706378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 
00:35:58.891 [2024-12-15 13:16:06.706492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.706526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.706716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.706751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.706954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.706990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.707267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.707300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.707518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.707552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 
00:35:58.891 [2024-12-15 13:16:06.707842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.707877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.708147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.708181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.708405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.708438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.708680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.708714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.708985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.709021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 
00:35:58.891 [2024-12-15 13:16:06.709236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.709269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.709545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.709579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.709720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.709753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.709961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.710023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.710219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.710259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 
00:35:58.891 [2024-12-15 13:16:06.710533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.710566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.891 [2024-12-15 13:16:06.710840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.891 [2024-12-15 13:16:06.710876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.891 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.711167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.711201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.711458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.711492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.711794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.711839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 
00:35:58.892 [2024-12-15 13:16:06.712038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.712072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.712349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.712382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.712607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.712640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.712915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.712952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.713187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.713222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 
00:35:58.892 [2024-12-15 13:16:06.713476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.713509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.713782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.713817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.714110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.714145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.714413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.714448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.714724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.714758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 
00:35:58.892 [2024-12-15 13:16:06.715040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.715077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.715300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.715334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.715589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.715623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.715820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.715865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.716121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.716154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 
00:35:58.892 [2024-12-15 13:16:06.716446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.716479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.716633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.716667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.716948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.716985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.717220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.717255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.717518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.717552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 
00:35:58.892 [2024-12-15 13:16:06.717847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.717882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.718084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.718119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.718322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.718356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.718543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.718577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.718769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.718803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 
00:35:58.892 [2024-12-15 13:16:06.719102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.719137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.719401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.719435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.719728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.719762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.720033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.720068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.720361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.720395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 
00:35:58.892 [2024-12-15 13:16:06.720663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.720698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.720991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.721028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.721296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.721330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.892 [2024-12-15 13:16:06.721528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.892 [2024-12-15 13:16:06.721562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.892 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.721859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.721896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 
00:35:58.893 [2024-12-15 13:16:06.722198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.722234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.722510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.722544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.722768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.722802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.723092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.723127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.723399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.723433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 
00:35:58.893 [2024-12-15 13:16:06.723701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.723734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.724034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.724069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.724332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.724366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.724570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.724603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.724878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.724914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 
00:35:58.893 [2024-12-15 13:16:06.725207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.725241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.725512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.725545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.725749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.725783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.726026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.726063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.726365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.726399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 
00:35:58.893 [2024-12-15 13:16:06.726674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.726708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.726846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.726882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.727069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.727103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.727286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.727320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.727453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.727486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 
00:35:58.893 [2024-12-15 13:16:06.727735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.727768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.727995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.728030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.728301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.728336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.728518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.728553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.728823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.728870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 
00:35:58.893 [2024-12-15 13:16:06.729089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.729123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.729350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.729390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.729647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.729681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.729890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.729928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.730126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.730162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 
00:35:58.893 [2024-12-15 13:16:06.730347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.730379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.730668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.730702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.730887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.730923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.731175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.731209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.731353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.731386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 
00:35:58.893 [2024-12-15 13:16:06.731606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.731639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.731836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.731871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.732069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.893 [2024-12-15 13:16:06.732102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.893 qpair failed and we were unable to recover it. 00:35:58.893 [2024-12-15 13:16:06.732367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.732402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.732582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.732615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 
00:35:58.894 [2024-12-15 13:16:06.732807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.732850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.733066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.733100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.733323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.733357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.733542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.733576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.733849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.733885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 
00:35:58.894 [2024-12-15 13:16:06.734041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.734075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.734363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.734397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.734626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.734661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.734879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.734914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.735190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.735224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 
00:35:58.894 [2024-12-15 13:16:06.735511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.735545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.735822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.735865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.736091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.736125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.736317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.736351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.736652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.736685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 
00:35:58.894 [2024-12-15 13:16:06.736880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.736916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.737115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.737150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.737426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.737460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.737761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.737796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.738031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.738067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 
00:35:58.894 [2024-12-15 13:16:06.738321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.738356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.738639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.738672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.738950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.738987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.739272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.739307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.739579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.739613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 
00:35:58.894 [2024-12-15 13:16:06.739904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.739939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.740124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.740164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.740364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.740398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.740603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.740638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.740847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.740884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 
00:35:58.894 [2024-12-15 13:16:06.741135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.741169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.741425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.741459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.741731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.741765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.742077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.894 [2024-12-15 13:16:06.742112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.894 qpair failed and we were unable to recover it. 00:35:58.894 [2024-12-15 13:16:06.742371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.742405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 
00:35:58.895 [2024-12-15 13:16:06.742627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.742660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.742933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.742969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.743251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.743285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.743567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.743600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.743883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.743920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 
00:35:58.895 [2024-12-15 13:16:06.744052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.744087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.744268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.744302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.744503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.744539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.744793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.744838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.745123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.745156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 
00:35:58.895 [2024-12-15 13:16:06.745430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.745464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.745666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.745700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.745904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.745940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.746212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.746246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.746448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.746482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 
00:35:58.895 [2024-12-15 13:16:06.746737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.746771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.746912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.746948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.747134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.747167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.747447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.747481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.747749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.747784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 
00:35:58.895 [2024-12-15 13:16:06.748081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.748116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.748378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.748412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.748600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.748634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.748850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.748886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.749145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.749179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 
00:35:58.895 [2024-12-15 13:16:06.749430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.749464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.749661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.749695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.749902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.749938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.750194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.750228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.750534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.750568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 
00:35:58.895 [2024-12-15 13:16:06.750836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.750872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.895 [2024-12-15 13:16:06.751075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.895 [2024-12-15 13:16:06.751115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.895 qpair failed and we were unable to recover it. 00:35:58.896 [2024-12-15 13:16:06.751371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.896 [2024-12-15 13:16:06.751404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.896 qpair failed and we were unable to recover it. 00:35:58.896 [2024-12-15 13:16:06.751600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.896 [2024-12-15 13:16:06.751634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.896 qpair failed and we were unable to recover it. 00:35:58.896 [2024-12-15 13:16:06.751913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:58.896 [2024-12-15 13:16:06.751949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:58.896 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-15 13:16:06.779396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.779430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.779569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.779602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.779796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.779838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.780115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.780153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.780362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.780395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-15 13:16:06.780647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.780681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.780979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.781017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.781231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.781265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.781481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.781515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.781793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.781851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-15 13:16:06.782051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.782084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.782293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.782332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.782537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.782570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.782850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.782885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.783141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.783174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-15 13:16:06.783377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.783409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.783686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.783720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.783908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.783943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.784145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.784179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.784317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.784351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-15 13:16:06.784642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.784677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.784973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.785009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.785149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.785183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.785437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.785471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.785748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.785782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 
00:35:59.175 [2024-12-15 13:16:06.786027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.786064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.786207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.786241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.786447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.786481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.175 qpair failed and we were unable to recover it. 00:35:59.175 [2024-12-15 13:16:06.786817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.175 [2024-12-15 13:16:06.786862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.787045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.787079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 
00:35:59.176 [2024-12-15 13:16:06.787356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.787390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.787589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.787623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.787933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.787968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.788240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.788274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.788582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.788615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 
00:35:59.176 [2024-12-15 13:16:06.788897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.788933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.789162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.789197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.789398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.789431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.789713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.789748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.790062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.790098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 
00:35:59.176 [2024-12-15 13:16:06.790344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.790378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.790649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.790682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.790899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.790935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.791155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.791188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.791492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.791526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 
00:35:59.176 [2024-12-15 13:16:06.791788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.791822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.792063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.792097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.792375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.792410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.792604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.792639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.792846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.792882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 
00:35:59.176 [2024-12-15 13:16:06.793026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.793060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.793341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.793380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.793508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.793543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.793848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.793884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.794164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.794198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 
00:35:59.176 [2024-12-15 13:16:06.794435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.794469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.794656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.794690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.794892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.794928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.795116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.795151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.795281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.795315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 
00:35:59.176 [2024-12-15 13:16:06.795627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.795661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.795968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.796004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.796251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.796284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.796509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.176 [2024-12-15 13:16:06.796543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.176 qpair failed and we were unable to recover it. 00:35:59.176 [2024-12-15 13:16:06.796797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.177 [2024-12-15 13:16:06.796841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.177 qpair failed and we were unable to recover it. 
00:35:59.177 [2024-12-15 13:16:06.797135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.177 [2024-12-15 13:16:06.797170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.177 qpair failed and we were unable to recover it. 00:35:59.177 [2024-12-15 13:16:06.797378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.177 [2024-12-15 13:16:06.797413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.177 qpair failed and we were unable to recover it. 00:35:59.177 [2024-12-15 13:16:06.797684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.177 [2024-12-15 13:16:06.797718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.177 qpair failed and we were unable to recover it. 00:35:59.177 [2024-12-15 13:16:06.798034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.177 [2024-12-15 13:16:06.798069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.177 qpair failed and we were unable to recover it. 00:35:59.177 [2024-12-15 13:16:06.798265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.177 [2024-12-15 13:16:06.798298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.177 qpair failed and we were unable to recover it. 
00:35:59.177 [2024-12-15 13:16:06.798581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.177 [2024-12-15 13:16:06.798616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.177 qpair failed and we were unable to recover it. 00:35:59.177 [2024-12-15 13:16:06.798874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.177 [2024-12-15 13:16:06.798910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.177 qpair failed and we were unable to recover it. 00:35:59.177 [2024-12-15 13:16:06.799206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.177 [2024-12-15 13:16:06.799240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.177 qpair failed and we were unable to recover it. 00:35:59.177 [2024-12-15 13:16:06.799490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.179 [2024-12-15 13:16:06.799524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.179 qpair failed and we were unable to recover it. 00:35:59.179 [2024-12-15 13:16:06.799822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.179 [2024-12-15 13:16:06.799867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.179 qpair failed and we were unable to recover it. 
00:35:59.179 [2024-12-15 13:16:06.800026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.179 [2024-12-15 13:16:06.800059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.179 qpair failed and we were unable to recover it. 00:35:59.179 [2024-12-15 13:16:06.800242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.179 [2024-12-15 13:16:06.800276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.179 qpair failed and we were unable to recover it. 00:35:59.179 [2024-12-15 13:16:06.800504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.179 [2024-12-15 13:16:06.800538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.179 qpair failed and we were unable to recover it. 00:35:59.179 [2024-12-15 13:16:06.800848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.179 [2024-12-15 13:16:06.800884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.179 qpair failed and we were unable to recover it. 00:35:59.179 [2024-12-15 13:16:06.801083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.179 [2024-12-15 13:16:06.801117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.179 qpair failed and we were unable to recover it. 
00:35:59.179 [2024-12-15 13:16:06.801327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.179 [2024-12-15 13:16:06.801360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.179 qpair failed and we were unable to recover it.
00:35:59.182 [last three messages repeated 114 more times for tqpair=0x7fbae4000b90 (addr=10.0.0.2, port=4420) between 13:16:06.801 and 13:16:06.832; errno = 111 is ECONNREFUSED]
00:35:59.182 [2024-12-15 13:16:06.832584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.182 [2024-12-15 13:16:06.832618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.182 qpair failed and we were unable to recover it. 00:35:59.182 [2024-12-15 13:16:06.832925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.182 [2024-12-15 13:16:06.832962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.182 qpair failed and we were unable to recover it. 00:35:59.182 [2024-12-15 13:16:06.833242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.182 [2024-12-15 13:16:06.833277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.182 qpair failed and we were unable to recover it. 00:35:59.182 [2024-12-15 13:16:06.833590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.182 [2024-12-15 13:16:06.833625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.182 qpair failed and we were unable to recover it. 00:35:59.182 [2024-12-15 13:16:06.833757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.182 [2024-12-15 13:16:06.833791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.182 qpair failed and we were unable to recover it. 
00:35:59.182 [2024-12-15 13:16:06.833999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.182 [2024-12-15 13:16:06.834035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.182 qpair failed and we were unable to recover it. 00:35:59.182 [2024-12-15 13:16:06.834256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.182 [2024-12-15 13:16:06.834291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.182 qpair failed and we were unable to recover it. 00:35:59.182 [2024-12-15 13:16:06.834507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.834539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.834815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.834861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.835086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.835121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 
00:35:59.183 [2024-12-15 13:16:06.835310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.835343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.835550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.835587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.835795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.835838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.836084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.836119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.836377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.836411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 
00:35:59.183 [2024-12-15 13:16:06.836606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.836640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.836787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.836840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.837122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.837155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.837442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.837476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.837785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.837821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 
00:35:59.183 [2024-12-15 13:16:06.838124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.838159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.838292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.838326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.838585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.838618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.838897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.838934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.839129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.839163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 
00:35:59.183 [2024-12-15 13:16:06.839344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.839378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.839596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.839630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.839900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.839935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.840193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.840228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.840429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.840463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 
00:35:59.183 [2024-12-15 13:16:06.840744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.840779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.841085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.841122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.841330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.841363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.841641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.841677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.841949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.841986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 
00:35:59.183 [2024-12-15 13:16:06.842190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.842224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.842468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.842501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.842699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.842732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.842989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.843026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.843231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.843265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 
00:35:59.183 [2024-12-15 13:16:06.843494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.843529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.843813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.843873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.844172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.844208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.844474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.844510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.844718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.844750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 
00:35:59.183 [2024-12-15 13:16:06.845015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.183 [2024-12-15 13:16:06.845053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.183 qpair failed and we were unable to recover it. 00:35:59.183 [2024-12-15 13:16:06.845261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.845297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.845552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.845587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.845712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.845746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.845883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.845918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 
00:35:59.184 [2024-12-15 13:16:06.846196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.846230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.846500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.846535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.846844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.846880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.847140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.847177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.847405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.847439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 
00:35:59.184 [2024-12-15 13:16:06.847703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.847738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.848030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.848072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.848277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.848311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.848518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.848552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.848819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.848863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 
00:35:59.184 [2024-12-15 13:16:06.849138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.849175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.849455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.849490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.849643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.849678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.849865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.849900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.850090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.850124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 
00:35:59.184 [2024-12-15 13:16:06.850305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.850340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.850594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.850629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.850823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.850881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.851167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.851201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.851427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.851463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 
00:35:59.184 [2024-12-15 13:16:06.851714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.851751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.851969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.852006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.852311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.852345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.852534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.852570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 00:35:59.184 [2024-12-15 13:16:06.852852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.852888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it. 
00:35:59.184 [2024-12-15 13:16:06.853103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.184 [2024-12-15 13:16:06.853139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.184 qpair failed and we were unable to recover it.
[identical error sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeated continuously from 13:16:06.853 through 13:16:06.882; duplicate repeats elided]
00:35:59.188 [2024-12-15 13:16:06.883000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.883038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.883234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.883268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.883544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.883577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.883728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.883762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.884050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.884088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 
00:35:59.188 [2024-12-15 13:16:06.884312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.884346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.884628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.884661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.884911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.884950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.885256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.885290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.885515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.885550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 
00:35:59.188 [2024-12-15 13:16:06.885765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.885798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.886022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.886059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.886314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.886348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.886608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.886648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.886906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.886945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 
00:35:59.188 [2024-12-15 13:16:06.887237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.887271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.887475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.887510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.887785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.887819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.888106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.888139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.888326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.888360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 
00:35:59.188 [2024-12-15 13:16:06.888567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.888602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.888732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.888765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.888967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.889002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.889296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.889331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.889537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.889571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 
00:35:59.188 [2024-12-15 13:16:06.889709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.889743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.890024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.890061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.890276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.890311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.890533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.890569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.890790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.890852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 
00:35:59.188 [2024-12-15 13:16:06.891139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.891175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.891431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.891465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.891600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.891634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.891965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.892002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.892233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.892266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 
00:35:59.188 [2024-12-15 13:16:06.892582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.892616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.892879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.892916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.188 [2024-12-15 13:16:06.893055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.188 [2024-12-15 13:16:06.893088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.188 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.893200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.893233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.893506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.893540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 
00:35:59.189 [2024-12-15 13:16:06.893821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.893870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.894069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.894105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.894365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.894400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.894684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.894719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.894940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.894977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 
00:35:59.189 [2024-12-15 13:16:06.895231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.895266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.895453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.895486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.895678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.895713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.895983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.896019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.896240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.896274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 
00:35:59.189 [2024-12-15 13:16:06.896550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.896584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.896846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.896881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.897184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.897219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.897501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.897540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.897727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.897760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 
00:35:59.189 [2024-12-15 13:16:06.897980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.898015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.898293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.898326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.898549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.898584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.898862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.898898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.899140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.899173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 
00:35:59.189 [2024-12-15 13:16:06.899362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.899395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.899584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.899618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.899815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.899862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.900117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.900151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.900336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.900369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 
00:35:59.189 [2024-12-15 13:16:06.900586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.900620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.900879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.900915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.901122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.901156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.901416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.901451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.901577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.901612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 
00:35:59.189 [2024-12-15 13:16:06.901890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.901925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.902178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.902212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.902471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.902506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.902744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.902778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.903107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.903143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 
00:35:59.189 [2024-12-15 13:16:06.903398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.903433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.189 [2024-12-15 13:16:06.903639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.189 [2024-12-15 13:16:06.903673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.189 qpair failed and we were unable to recover it. 00:35:59.190 [2024-12-15 13:16:06.903941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.190 [2024-12-15 13:16:06.903979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.190 qpair failed and we were unable to recover it. 00:35:59.190 [2024-12-15 13:16:06.904111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.190 [2024-12-15 13:16:06.904145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.190 qpair failed and we were unable to recover it. 00:35:59.190 [2024-12-15 13:16:06.904336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.190 [2024-12-15 13:16:06.904369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.190 qpair failed and we were unable to recover it. 
00:35:59.193 [2024-12-15 13:16:06.933130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.933166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.933433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.933467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.933762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.933796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.934004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.934039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.934343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.934377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 
00:35:59.193 [2024-12-15 13:16:06.934632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.934667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.934893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.934930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.935186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.935222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.935509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.935543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.935739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.935774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 
00:35:59.193 [2024-12-15 13:16:06.936066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.936101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.936385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.936418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.936644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.936678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.936929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.936965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.937156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.937189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 
00:35:59.193 [2024-12-15 13:16:06.937461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.937496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.937621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.937655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.937865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.937900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.938106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.938139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.938342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.938378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 
00:35:59.193 [2024-12-15 13:16:06.938522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.938555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.938823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.938873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.939149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.939183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.939441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.939480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.939599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.939632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 
00:35:59.193 [2024-12-15 13:16:06.939821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.939870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.940122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.940156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.940437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.940472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.940602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.940635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.940932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.940969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 
00:35:59.193 [2024-12-15 13:16:06.941235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.941271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.941557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.941590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.941785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.941818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.942130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.942165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.942418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.942453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 
00:35:59.193 [2024-12-15 13:16:06.942756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.942790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.943028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.193 [2024-12-15 13:16:06.943064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.193 qpair failed and we were unable to recover it. 00:35:59.193 [2024-12-15 13:16:06.943267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.943302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.943585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.943620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.943802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.943847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 
00:35:59.194 [2024-12-15 13:16:06.944129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.944164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.944297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.944332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.944536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.944571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.944761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.944796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.945015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.945051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 
00:35:59.194 [2024-12-15 13:16:06.945236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.945272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.945456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.945493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.945671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.945705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.945988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.946025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.946322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.946356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 
00:35:59.194 [2024-12-15 13:16:06.946673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.946709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.946894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.946930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.947157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.947191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.947448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.947482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.947687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.947723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 
00:35:59.194 [2024-12-15 13:16:06.947936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.947972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.948230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.948263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.948449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.948484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.948764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.948800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.948964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.948999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 
00:35:59.194 [2024-12-15 13:16:06.949196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.949230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.949427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.949461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.949734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.949768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.949915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.949958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.950096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.950129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 
00:35:59.194 [2024-12-15 13:16:06.950407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.950442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.950666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.950702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.950933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.950970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.951160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.951194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.951379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.951411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 
00:35:59.194 [2024-12-15 13:16:06.951690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.951723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.951930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.951964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.952217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.952253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.952528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.952561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.194 qpair failed and we were unable to recover it. 00:35:59.194 [2024-12-15 13:16:06.952746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.194 [2024-12-15 13:16:06.952780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.195 qpair failed and we were unable to recover it. 
00:35:59.195 [2024-12-15 13:16:06.953047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.195 [2024-12-15 13:16:06.953081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.195 qpair failed and we were unable to recover it.
00:35:59.198 [... the same three-line connect() failed / sock connection error / qpair failed sequence repeats for each subsequent connection attempt to 10.0.0.2:4420, with only the timestamps changing, through 13:16:06.982607 ...]
00:35:59.198 [2024-12-15 13:16:06.982798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.982855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.982973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.983007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.983287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.983321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.983576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.983610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.983793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.983842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 
00:35:59.198 [2024-12-15 13:16:06.984055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.984088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.984369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.984403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.984606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.984640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.984850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.984887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.985071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.985106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 
00:35:59.198 [2024-12-15 13:16:06.985389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.985423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.985676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.985711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.985965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.986000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.986275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.986309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.986595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.986629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 
00:35:59.198 [2024-12-15 13:16:06.986752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.986786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.986976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.987011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.987228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.987261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.987536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.987570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.987834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.987869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 
00:35:59.198 [2024-12-15 13:16:06.988018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.988053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.988249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.988282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.988466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.988499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.988727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.988761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.988977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.989012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 
00:35:59.198 [2024-12-15 13:16:06.989266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.989300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.989572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.989606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.989812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.989859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.990087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.990121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.990377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.990410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 
00:35:59.198 [2024-12-15 13:16:06.990618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.990652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.990926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.990963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.991162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.198 [2024-12-15 13:16:06.991195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.198 qpair failed and we were unable to recover it. 00:35:59.198 [2024-12-15 13:16:06.991398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.991432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.991653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.991688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 
00:35:59.199 [2024-12-15 13:16:06.991973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.992009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.992186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.992221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.992525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.992560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.992743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.992777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.992990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.993025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 
00:35:59.199 [2024-12-15 13:16:06.993244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.993278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.993578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.993612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.993895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.993931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.994131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.994165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.994470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.994504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 
00:35:59.199 [2024-12-15 13:16:06.994786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.994820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.995064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.995098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.995375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.995409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.995638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.995671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.995948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.995985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 
00:35:59.199 [2024-12-15 13:16:06.996239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.996273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.996552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.996587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.996845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.996881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.997094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.997128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.997399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.997433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 
00:35:59.199 [2024-12-15 13:16:06.997705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.997740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.998063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.998098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.998379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.998413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.998633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.998666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.998896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.998932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 
00:35:59.199 [2024-12-15 13:16:06.999214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.999254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.999455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.999488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.999677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.999711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.199 [2024-12-15 13:16:06.999848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.199 [2024-12-15 13:16:06.999883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.199 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.000082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.000116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 
00:35:59.200 [2024-12-15 13:16:07.000395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.000429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.000683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.000717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.000913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.000949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.001210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.001245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.001430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.001465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 
00:35:59.200 [2024-12-15 13:16:07.001719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.001753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.001954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.001989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.002244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.002278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.002603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.002637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.002936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.002972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 
00:35:59.200 [2024-12-15 13:16:07.003168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.003201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.003458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.003492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.003746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.003780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.004055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.004092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 00:35:59.200 [2024-12-15 13:16:07.004300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.200 [2024-12-15 13:16:07.004334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.200 qpair failed and we were unable to recover it. 
00:35:59.203 [2024-12-15 13:16:07.034909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.034944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.035246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.035279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.035540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.035573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.035760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.035795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.036078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.036113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 
00:35:59.203 [2024-12-15 13:16:07.036374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.036408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.036705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.036739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.036997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.037032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.037335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.037369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.037655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.037690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 
00:35:59.203 [2024-12-15 13:16:07.037894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.037931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.038203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.038237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.038351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.038385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.038642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.038676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.038864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.038900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 
00:35:59.203 [2024-12-15 13:16:07.039095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.039129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.039313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.039347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.039629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.039662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.039917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.039954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.040141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.040175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 
00:35:59.203 [2024-12-15 13:16:07.040428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.040461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.040654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.040688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.040893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.040930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.041186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.041220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.041528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.041562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 
00:35:59.203 [2024-12-15 13:16:07.041844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.041880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.203 qpair failed and we were unable to recover it. 00:35:59.203 [2024-12-15 13:16:07.042074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.203 [2024-12-15 13:16:07.042108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.042298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.042332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.042600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.042634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.042839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.042874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 
00:35:59.204 [2024-12-15 13:16:07.043072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.043112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.043337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.043370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.043628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.043661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.043914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.043951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.044168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.044202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 
00:35:59.204 [2024-12-15 13:16:07.044474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.044509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.044799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.044841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.045121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.045155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.045404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.045438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.045691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.045725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 
00:35:59.204 [2024-12-15 13:16:07.045914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.045948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.046229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.046262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.046513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.046547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.046859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.046895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.047133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.047168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 
00:35:59.204 [2024-12-15 13:16:07.047429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.047463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.047677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.047711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.047972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.048008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.048224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.048257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.048461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.048496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 
00:35:59.204 [2024-12-15 13:16:07.048752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.048786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.049078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.049113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.049316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.049350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.049602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.049636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.049866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.049902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 
00:35:59.204 [2024-12-15 13:16:07.050035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.050070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.050278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.050311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.050637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.050673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.050930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.050966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 00:35:59.204 [2024-12-15 13:16:07.051220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.204 [2024-12-15 13:16:07.051255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.204 qpair failed and we were unable to recover it. 
00:35:59.204 [2024-12-15 13:16:07.051500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.051533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.051813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.051862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.052129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.052164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.052359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.052391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.052672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.052706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 
00:35:59.205 [2024-12-15 13:16:07.052924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.052961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.053240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.053274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.053478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.053513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.053769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.053804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.054000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.054035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 
00:35:59.205 [2024-12-15 13:16:07.054222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.054262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.054540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.054574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.054865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.054902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.055136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.055171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 00:35:59.205 [2024-12-15 13:16:07.055372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.055407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 
00:35:59.205 [2024-12-15 13:16:07.055518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.205 [2024-12-15 13:16:07.055551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.205 qpair failed and we were unable to recover it. 
[... identical entries elided: the same connect() failure (errno = 111, ECONNREFUSED) from posix.c:1054:posix_sock_create and the same nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock error for tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", repeat continuously from timestamp 13:16:07.055861 through 13:16:07.085846 (log clock 00:35:59.205–00:35:59.487) ...]
00:35:59.487 [2024-12-15 13:16:07.085999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.086041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.086341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.086376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.086497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.086532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.086739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.086773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.087050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.087086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 
00:35:59.487 [2024-12-15 13:16:07.087293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.087328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.087464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.087500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.087706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.087740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.088047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.088084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.088384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.088418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 
00:35:59.487 [2024-12-15 13:16:07.088565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.088600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.088754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.088789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.088936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.088970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.089083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.089116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.089260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.089295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 
00:35:59.487 [2024-12-15 13:16:07.089494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.089527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.089718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.089751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.089891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.089926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.090200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.090238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 00:35:59.487 [2024-12-15 13:16:07.090437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.487 [2024-12-15 13:16:07.090472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.487 qpair failed and we were unable to recover it. 
00:35:59.488 [2024-12-15 13:16:07.090592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.090627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.090750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.090783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.091067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.091104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.091218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.091252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.091367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.091403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 
00:35:59.488 [2024-12-15 13:16:07.091599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.091636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.091852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.091889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.092105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.092140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.092350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.092384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.092506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.092540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 
00:35:59.488 [2024-12-15 13:16:07.092730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.092764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.092906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.092943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.093142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.093176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.093381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.093416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.093554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.093588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 
00:35:59.488 [2024-12-15 13:16:07.093776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.093810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.094027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.094063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.094189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.094225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.094415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.094450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.094582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.094616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 
00:35:59.488 [2024-12-15 13:16:07.094738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.094779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.094979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.095018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.095205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.095239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.095519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.095555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.095693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.095729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 
00:35:59.488 [2024-12-15 13:16:07.095878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.095913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.096036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.096072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.096272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.096305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.096488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.096522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.096711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.096746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 
00:35:59.488 [2024-12-15 13:16:07.096888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.096923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.097125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.097162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.097444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.097478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.097674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.097709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.097921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.097958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 
00:35:59.488 [2024-12-15 13:16:07.098217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.098252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.098385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.098422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.098547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.098582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.098698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.488 [2024-12-15 13:16:07.098732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.488 qpair failed and we were unable to recover it. 00:35:59.488 [2024-12-15 13:16:07.098964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.099002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 
00:35:59.489 [2024-12-15 13:16:07.099267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.099309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.099498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.099533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.099796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.099846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.100093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.100127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.100245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.100279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 
00:35:59.489 [2024-12-15 13:16:07.100400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.100435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.100653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.100698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.100886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.100923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.101037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.101070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.101287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.101322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 
00:35:59.489 [2024-12-15 13:16:07.101539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.101575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.101699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.101734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.101934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.101971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.102121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.102154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 00:35:59.489 [2024-12-15 13:16:07.102428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.489 [2024-12-15 13:16:07.102464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.489 qpair failed and we were unable to recover it. 
00:35:59.489 [2024-12-15 13:16:07.102663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.489 [2024-12-15 13:16:07.102698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.489 qpair failed and we were unable to recover it.
00:35:59.489 [... identical connect()/qpair-failure triples for tqpair=0x7fbae4000b90 (addr=10.0.0.2, port=4420) repeat continuously from 13:16:07.102896 through 13:16:07.127129; repeated entries omitted ...]
00:35:59.492 [2024-12-15 13:16:07.127256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.127288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.127397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.127430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.127623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.127657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.127765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.127799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.127989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.128023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 
00:35:59.492 [2024-12-15 13:16:07.128153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.128186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.128439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.128473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.128723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.128757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.128940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.128975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.129178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.129213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 
00:35:59.492 [2024-12-15 13:16:07.129342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.129376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.129628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.129662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.129859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.129895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.130017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.130047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.492 [2024-12-15 13:16:07.130224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.130260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 
00:35:59.492 [2024-12-15 13:16:07.130518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.492 [2024-12-15 13:16:07.130553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.492 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.130750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.130783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.130939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.130973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.131092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.131126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.131369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.131401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 
00:35:59.493 [2024-12-15 13:16:07.131593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.131626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.131736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.131766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.131929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.131965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.132222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.132255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.132479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.132512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 
00:35:59.493 [2024-12-15 13:16:07.132717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.132751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.132974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.133010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.133281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.133314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.133513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.133548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.133674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.133707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 
00:35:59.493 [2024-12-15 13:16:07.133898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.133934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.134150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.134184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.134436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.134470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.134750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.134784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.135062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.135097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 
00:35:59.493 [2024-12-15 13:16:07.135287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.135322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.135592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.135625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.135900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.135941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.136191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.136224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.136408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.136443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 
00:35:59.493 [2024-12-15 13:16:07.136573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.136605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.136869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.136909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.137035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.137071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.137255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.137289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.137439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.137472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 
00:35:59.493 [2024-12-15 13:16:07.137773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.137807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.138096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.138131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.138297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.138337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.138526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.138560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.138814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.138863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 
00:35:59.493 [2024-12-15 13:16:07.138985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.139022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.139215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.493 [2024-12-15 13:16:07.139249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.493 qpair failed and we were unable to recover it. 00:35:59.493 [2024-12-15 13:16:07.139548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.139583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.139853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.139890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.140088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.140124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 
00:35:59.494 [2024-12-15 13:16:07.140325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.140359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.140548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.140583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.140693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.140727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.140936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.140972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.141174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.141210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 
00:35:59.494 [2024-12-15 13:16:07.141416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.141451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.141643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.141676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.141881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.141916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.142105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.142139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.142395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.142430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 
00:35:59.494 [2024-12-15 13:16:07.142612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.142645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.142839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.142875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.143149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.143184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.143368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.143402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.143526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.143576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 
00:35:59.494 [2024-12-15 13:16:07.143837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.143872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.144064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.144097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.144253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.144287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.144480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.144516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 00:35:59.494 [2024-12-15 13:16:07.144724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.144759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 
00:35:59.494 [2024-12-15 13:16:07.144971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.494 [2024-12-15 13:16:07.145007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.494 qpair failed and we were unable to recover it. 
[log trimmed: the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fbae4000b90 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeated for every reconnect attempt from 13:16:07.145215 through 13:16:07.175232]
00:35:59.497 [2024-12-15 13:16:07.175362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.497 [2024-12-15 13:16:07.175393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.497 qpair failed and we were unable to recover it. 00:35:59.497 [2024-12-15 13:16:07.175535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.497 [2024-12-15 13:16:07.175566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.497 qpair failed and we were unable to recover it. 00:35:59.497 [2024-12-15 13:16:07.175847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.497 [2024-12-15 13:16:07.175881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.497 qpair failed and we were unable to recover it. 00:35:59.497 [2024-12-15 13:16:07.176044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.497 [2024-12-15 13:16:07.176076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.497 qpair failed and we were unable to recover it. 00:35:59.497 [2024-12-15 13:16:07.176303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.497 [2024-12-15 13:16:07.176334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.497 qpair failed and we were unable to recover it. 
00:35:59.497 [2024-12-15 13:16:07.176613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.497 [2024-12-15 13:16:07.176645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.497 qpair failed and we were unable to recover it. 00:35:59.497 [2024-12-15 13:16:07.176868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.497 [2024-12-15 13:16:07.176908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.497 qpair failed and we were unable to recover it. 00:35:59.497 [2024-12-15 13:16:07.177191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.497 [2024-12-15 13:16:07.177230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.497 qpair failed and we were unable to recover it. 00:35:59.497 [2024-12-15 13:16:07.177440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.497 [2024-12-15 13:16:07.177475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.497 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.177701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.177736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 
00:35:59.498 [2024-12-15 13:16:07.177863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.177899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.178208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.178242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.178458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.178492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.178702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.178736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.178932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.178969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 
00:35:59.498 [2024-12-15 13:16:07.179157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.179192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.179373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.179407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.179689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.179723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.179991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.180027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.180299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.180334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 
00:35:59.498 [2024-12-15 13:16:07.180616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.180650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.180865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.180902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.181185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.181219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.181438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.181473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.181760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.181794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 
00:35:59.498 [2024-12-15 13:16:07.182004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.182038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.182297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.182333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.182545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.182580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.182852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.182890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.183150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.183184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 
00:35:59.498 [2024-12-15 13:16:07.183330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.183365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.183570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.183605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.183816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.183862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.184120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.184155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.184471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.184505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 
00:35:59.498 [2024-12-15 13:16:07.184645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.184683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.184874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.184909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.185219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.185255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.185536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.185571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.185856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.185892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 
00:35:59.498 [2024-12-15 13:16:07.186167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.186201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.186400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.186434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.186563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.186597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.186779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.186812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 00:35:59.498 [2024-12-15 13:16:07.186968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.498 [2024-12-15 13:16:07.187004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.498 qpair failed and we were unable to recover it. 
00:35:59.498 [2024-12-15 13:16:07.187209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.187242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.187447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.187481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.187703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.187745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.188029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.188064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.188335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.188369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 
00:35:59.499 [2024-12-15 13:16:07.188630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.188665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.188867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.188903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.189203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.189238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.189450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.189485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.189626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.189661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 
00:35:59.499 [2024-12-15 13:16:07.189860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.189895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.190214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.190248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.190484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.190518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.190726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.190759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.190963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.191000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 
00:35:59.499 [2024-12-15 13:16:07.191274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.191308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.191454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.191489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.191767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.191802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.192035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.192071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.192287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.192323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 
00:35:59.499 [2024-12-15 13:16:07.192625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.192659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.192923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.192959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.193147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.193181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.193457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.193490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.193738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.193772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 
00:35:59.499 [2024-12-15 13:16:07.194081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.194116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.194393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.194426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.194704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.194738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.195022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.195058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 00:35:59.499 [2024-12-15 13:16:07.195337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.195373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 
00:35:59.499 [2024-12-15 13:16:07.195579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.499 [2024-12-15 13:16:07.195612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.499 qpair failed and we were unable to recover it. 
[last three messages repeated for every reconnect attempt from 13:16:07.195814 through 13:16:07.226467, identical apart from timestamps: connect() failed with errno = 111, sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it]
00:35:59.503 [2024-12-15 13:16:07.226598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.226631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.226847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.226883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.227214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.227248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.227471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.227506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.227787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.227823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 
00:35:59.503 [2024-12-15 13:16:07.228105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.228140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.228421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.228455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.228719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.228752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.228969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.229006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.229281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.229315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 
00:35:59.503 [2024-12-15 13:16:07.229599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.229633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.229918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.229972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.230214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.230247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.230572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.230606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.230897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.230933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 
00:35:59.503 [2024-12-15 13:16:07.234853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.234915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.235221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.235257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.235548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.235592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.235865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.235903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.236048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.236082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 
00:35:59.503 [2024-12-15 13:16:07.236323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.236357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.236655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.236689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.236953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.236987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.237272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.237305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.237611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.237645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 
00:35:59.503 [2024-12-15 13:16:07.237875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.237910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.238107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.238141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.238340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.238373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.238657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.238691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.238914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.238949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 
00:35:59.503 [2024-12-15 13:16:07.239191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.239225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.239443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.239477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.503 qpair failed and we were unable to recover it. 00:35:59.503 [2024-12-15 13:16:07.239674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.503 [2024-12-15 13:16:07.239708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.239988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.240030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.240201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.240228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 
00:35:59.504 [2024-12-15 13:16:07.240418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.240445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.240760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.240787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.241075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.241103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.241279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.241305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.241510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.241537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 
00:35:59.504 [2024-12-15 13:16:07.241707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.241733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.241920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.241948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.242140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.242165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.242444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.242472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.242591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.242618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 
00:35:59.504 [2024-12-15 13:16:07.242898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.242928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.243132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.243160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.243405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.243433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.243639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.243665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.243852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.243880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 
00:35:59.504 [2024-12-15 13:16:07.244120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.244147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.244860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.244900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.245180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.245215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.245385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.245416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.245715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.245746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 
00:35:59.504 [2024-12-15 13:16:07.245926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.245955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.248316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.248368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.248678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.248726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.248963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.249004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.249226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.249262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 
00:35:59.504 [2024-12-15 13:16:07.249449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.249483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.249667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.249701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.249961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.249990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.250207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.250234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.250411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.250435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 
00:35:59.504 [2024-12-15 13:16:07.250610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.250634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.250815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.250861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.251095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.251128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.251416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.251449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.504 [2024-12-15 13:16:07.251651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.251676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 
00:35:59.504 [2024-12-15 13:16:07.251939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.504 [2024-12-15 13:16:07.251982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.504 qpair failed and we were unable to recover it. 00:35:59.505 [2024-12-15 13:16:07.252187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.505 [2024-12-15 13:16:07.252221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.505 qpair failed and we were unable to recover it. 00:35:59.505 [2024-12-15 13:16:07.252498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.505 [2024-12-15 13:16:07.252533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.505 qpair failed and we were unable to recover it. 00:35:59.505 [2024-12-15 13:16:07.252745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.505 [2024-12-15 13:16:07.252779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.505 qpair failed and we were unable to recover it. 00:35:59.505 [2024-12-15 13:16:07.253072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.505 [2024-12-15 13:16:07.253108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.505 qpair failed and we were unable to recover it. 
00:35:59.505 [2024-12-15 13:16:07.253300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:59.505 [2024-12-15 13:16:07.253334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 
00:35:59.505 qpair failed and we were unable to recover it. 
00:35:59.508 (last message sequence repeated for every reconnect attempt from 13:16:07.253300 through 13:16:07.283766: each connect() to 10.0.0.2, port=4420 fails with errno = 111 and tqpair=0x7fbae4000b90 cannot be recovered)
00:35:59.508 [2024-12-15 13:16:07.283996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.284030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.284231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.284264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.284447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.284481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.284631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.284666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.284887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.284923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 
00:35:59.508 [2024-12-15 13:16:07.285110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.285144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.285342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.285376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.285503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.285536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.285660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.285691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.285966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.286003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 
00:35:59.508 [2024-12-15 13:16:07.286280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.286313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.286426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.286460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.286727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.286760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.286955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.286990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.287193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.287231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 
00:35:59.508 [2024-12-15 13:16:07.287368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.287401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.287653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.287685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.287895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.287929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.288189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.288223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.288481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.288514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 
00:35:59.508 [2024-12-15 13:16:07.288664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.288697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.288881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.288915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.289114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.289147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.289274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.289308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 00:35:59.508 [2024-12-15 13:16:07.289497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.508 [2024-12-15 13:16:07.289530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.508 qpair failed and we were unable to recover it. 
00:35:59.508 [2024-12-15 13:16:07.289643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.289676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.289813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.289858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.290140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.290176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.290319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.290353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.290607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.290642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 
00:35:59.509 [2024-12-15 13:16:07.290846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.290880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.291073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.291107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.291298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.291331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.291475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.291508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.291637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.291670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 
00:35:59.509 [2024-12-15 13:16:07.291990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.292025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.292302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.292337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.292590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.292624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.292850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.292886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.293086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.293120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 
00:35:59.509 [2024-12-15 13:16:07.293382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.293417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.293616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.293649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.293951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.293986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.294234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.294269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.294471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.294505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 
00:35:59.509 [2024-12-15 13:16:07.294700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.294734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.294925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.294960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.295230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.295263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.295394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.295428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.295731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.295765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 
00:35:59.509 [2024-12-15 13:16:07.295963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.295997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.296221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.296255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.296383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.296416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.296667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.296702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.296840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.296885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 
00:35:59.509 [2024-12-15 13:16:07.297143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.297177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.297380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.297414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.297609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.297642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.297865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.297899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.298179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.298213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 
00:35:59.509 [2024-12-15 13:16:07.298439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.298473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.298676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.298709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.298977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.299011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.299203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.509 [2024-12-15 13:16:07.299237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.509 qpair failed and we were unable to recover it. 00:35:59.509 [2024-12-15 13:16:07.299452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.299486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 
00:35:59.510 [2024-12-15 13:16:07.299680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.299714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 00:35:59.510 [2024-12-15 13:16:07.299973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.300008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 00:35:59.510 [2024-12-15 13:16:07.300244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.300278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 00:35:59.510 [2024-12-15 13:16:07.300489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.300523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 00:35:59.510 [2024-12-15 13:16:07.300720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.300755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 
00:35:59.510 [2024-12-15 13:16:07.300940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.300977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 00:35:59.510 [2024-12-15 13:16:07.301235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.301269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 00:35:59.510 [2024-12-15 13:16:07.301388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.301423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 00:35:59.510 [2024-12-15 13:16:07.301611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.301645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 00:35:59.510 [2024-12-15 13:16:07.301848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.510 [2024-12-15 13:16:07.301883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.510 qpair failed and we were unable to recover it. 
00:35:59.510 [2024-12-15 13:16:07.302140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.510 [2024-12-15 13:16:07.302174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.510 qpair failed and we were unable to recover it.
00:35:59.513 [message repeated through 13:16:07.327287: connect() to 10.0.0.2 port 4420 kept failing with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:35:59.513 [2024-12-15 13:16:07.327420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.327452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.327666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.327699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.327893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.327927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.328118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.328153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.328333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.328366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 
00:35:59.513 [2024-12-15 13:16:07.328616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.328650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.328773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.328806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.329008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.329042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.329226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.329258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.329442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.329475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 
00:35:59.513 [2024-12-15 13:16:07.329658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.329692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.329940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2207c70 is same with the state(6) to be set 00:35:59.513 [2024-12-15 13:16:07.330160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.330250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.330424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.330462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.330736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.330773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.330976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.331011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 
00:35:59.513 [2024-12-15 13:16:07.331125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.331158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.331342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.331376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.331512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.331546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.331678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.331712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.331938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.331974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 
00:35:59.513 [2024-12-15 13:16:07.332185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.332218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.332341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.332377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.513 [2024-12-15 13:16:07.332499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.513 [2024-12-15 13:16:07.332532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.513 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.332664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.332699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.332847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.332883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 
00:35:59.514 [2024-12-15 13:16:07.333131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.333165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.333343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.333378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.333702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.333735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.333917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.333966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.334076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.334110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 
00:35:59.514 [2024-12-15 13:16:07.334255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.334289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.334489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.334524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.334793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.334835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.335019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.335054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.335251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.335285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 
00:35:59.514 [2024-12-15 13:16:07.335408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.335441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.335683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.335717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.335848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.335890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.336020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.336055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.336239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.336273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 
00:35:59.514 [2024-12-15 13:16:07.336460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.336493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.336690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.336723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.336869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.336905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.337101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.337135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.337261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.337293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 
00:35:59.514 [2024-12-15 13:16:07.337436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.337469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.337659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.337693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.337959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.337993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.338286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.338322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.338446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.338480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 
00:35:59.514 [2024-12-15 13:16:07.338687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.338720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.338932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.338969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.339096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.339130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.339330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.339363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.339502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.339535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 
00:35:59.514 [2024-12-15 13:16:07.339730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.339764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.339904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.339939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.340192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.340226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.340353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.340387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 00:35:59.514 [2024-12-15 13:16:07.340577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.514 [2024-12-15 13:16:07.340610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.514 qpair failed and we were unable to recover it. 
00:35:59.515 [2024-12-15 13:16:07.340801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.340847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.341065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.341098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.341224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.341256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.341452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.341486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.341637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.341670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 
00:35:59.515 [2024-12-15 13:16:07.341875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.341911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.342043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.342077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.342207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.342240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.342430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.342464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.342648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.342683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 
00:35:59.515 [2024-12-15 13:16:07.342956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.342991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.343235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.343270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.343532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.343567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.343791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.343837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 00:35:59.515 [2024-12-15 13:16:07.343960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.343994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 
00:35:59.515 [2024-12-15 13:16:07.344103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.515 [2024-12-15 13:16:07.344137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.515 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix.c:1054 connect() errno = 111 → nvme_tcp.c:2288 sock connection error → "qpair failed and we were unable to recover it.") repeats ~114 more times between 13:16:07.344 and 13:16:07.370, always against addr=10.0.0.2, port=4420 — first for tqpair=0x7fbae8000b90, then for tqpair=0x7fbae4000b90 ...]
00:35:59.518 [2024-12-15 13:16:07.370129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.518 [2024-12-15 13:16:07.370163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.518 qpair failed and we were unable to recover it. 00:35:59.518 [2024-12-15 13:16:07.370300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.518 [2024-12-15 13:16:07.370334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.518 qpair failed and we were unable to recover it. 00:35:59.518 [2024-12-15 13:16:07.370545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.518 [2024-12-15 13:16:07.370578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.518 qpair failed and we were unable to recover it. 00:35:59.518 [2024-12-15 13:16:07.370849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.518 [2024-12-15 13:16:07.370884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.518 qpair failed and we were unable to recover it. 00:35:59.518 [2024-12-15 13:16:07.371162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.518 [2024-12-15 13:16:07.371196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.518 qpair failed and we were unable to recover it. 
00:35:59.518 [2024-12-15 13:16:07.371438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.518 [2024-12-15 13:16:07.371471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.518 qpair failed and we were unable to recover it. 00:35:59.518 [2024-12-15 13:16:07.371715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.518 [2024-12-15 13:16:07.371749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.518 qpair failed and we were unable to recover it. 00:35:59.800 [2024-12-15 13:16:07.371938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.371981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 00:35:59.800 [2024-12-15 13:16:07.372119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.372152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 00:35:59.800 [2024-12-15 13:16:07.372280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.372314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 
00:35:59.800 [2024-12-15 13:16:07.372449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.372482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 00:35:59.800 [2024-12-15 13:16:07.372696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.372729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 00:35:59.800 [2024-12-15 13:16:07.372918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.372953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 00:35:59.800 [2024-12-15 13:16:07.373092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.373125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 00:35:59.800 [2024-12-15 13:16:07.373368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.373402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 
00:35:59.800 [2024-12-15 13:16:07.373522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.373556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 00:35:59.800 [2024-12-15 13:16:07.373743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.800 [2024-12-15 13:16:07.373776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.800 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.373916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.373952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.374157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.374190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.374330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.374363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 
00:35:59.801 [2024-12-15 13:16:07.374548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.374582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.374782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.374816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.375039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.375074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.375275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.375307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.375492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.375525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 
00:35:59.801 [2024-12-15 13:16:07.375716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.375750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.375892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.375927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.376062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.376096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.376210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.376244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.376432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.376466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 
00:35:59.801 [2024-12-15 13:16:07.376596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.376629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.376835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.376870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.377010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.377044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.377308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.377342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.377525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.377559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 
00:35:59.801 [2024-12-15 13:16:07.377677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.377710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.377843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.377877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.378079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.378112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.378352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.378385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.378625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.378657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 
00:35:59.801 [2024-12-15 13:16:07.378838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.378873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.379071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.379105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.379501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.379537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.379787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.379820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.380039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.380073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 
00:35:59.801 [2024-12-15 13:16:07.380252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.380286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.380412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.380444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.380626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.380664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.380780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.380814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.381002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.381036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 
00:35:59.801 [2024-12-15 13:16:07.381209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.381242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.381354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.381386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.381506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.381540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.381721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.381753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 00:35:59.801 [2024-12-15 13:16:07.381971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.801 [2024-12-15 13:16:07.382006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.801 qpair failed and we were unable to recover it. 
00:35:59.801 [2024-12-15 13:16:07.382189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.382223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.382403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.382436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.382626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.382660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.382786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.382820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.383026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.383060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 
00:35:59.802 [2024-12-15 13:16:07.383263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.383298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.383414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.383448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.383724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.383757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.384002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.384037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.384276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.384309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 
00:35:59.802 [2024-12-15 13:16:07.384441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.384474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.384660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.384693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.384814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.384860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.385109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.385142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.385356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.385389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 
00:35:59.802 [2024-12-15 13:16:07.385564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.385597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.385845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.385879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.386064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.386097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.386233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.386264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 00:35:59.802 [2024-12-15 13:16:07.386512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.386547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 
00:35:59.802 [2024-12-15 13:16:07.386753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.802 [2024-12-15 13:16:07.386785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.802 qpair failed and we were unable to recover it. 
[... the same error pair (posix.c:1054:posix_sock_create: connect() failed, errno = 111 → nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeated ~115 times between 13:16:07.386 and 13:16:07.412; identical repeats elided ...]
00:35:59.805 [2024-12-15 13:16:07.412490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.412525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.412729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.412761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.412943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.412976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.413241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.413275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.413527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.413620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 
00:35:59.805 [2024-12-15 13:16:07.413854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.413894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.414081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.414114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.414229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.414262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.414463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.414497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.414684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.414716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 
00:35:59.805 [2024-12-15 13:16:07.414911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.414948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.415189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.415221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.415350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.415383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.415649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.415682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.415813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.415876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 
00:35:59.805 [2024-12-15 13:16:07.416064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.416096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.416296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.805 [2024-12-15 13:16:07.416328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.805 qpair failed and we were unable to recover it. 00:35:59.805 [2024-12-15 13:16:07.416520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.416561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.416735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.416768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.416955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.416990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 
00:35:59.806 [2024-12-15 13:16:07.417187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.417220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.417468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.417502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.417743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.417775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.417980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.418014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.418278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.418311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 
00:35:59.806 [2024-12-15 13:16:07.418494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.418527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.418719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.418752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.418867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.418902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.419018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.419050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.419234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.419267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 
00:35:59.806 [2024-12-15 13:16:07.419382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.419414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.419663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.419696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.419893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.419928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.420170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.420203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.420326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.420359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 
00:35:59.806 [2024-12-15 13:16:07.420479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.420511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.420753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.420785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.420914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.420949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.421073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.421107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.421363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.421394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 
00:35:59.806 [2024-12-15 13:16:07.421500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.421531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.421658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.421690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.421872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.421906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.422038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.422071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.422310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.422388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 
00:35:59.806 [2024-12-15 13:16:07.422589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.422626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.422872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.422910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.423154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.423188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.423302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.423335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.423443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.423476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 
00:35:59.806 [2024-12-15 13:16:07.423671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.423704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.423887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.423923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.424121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.424154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.424323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.424355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.424560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.424593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 
00:35:59.806 [2024-12-15 13:16:07.424855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.806 [2024-12-15 13:16:07.424891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.806 qpair failed and we were unable to recover it. 00:35:59.806 [2024-12-15 13:16:07.425071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.425104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.425207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.425240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.425451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.425485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.425677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.425708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 
00:35:59.807 [2024-12-15 13:16:07.425977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.426013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.426191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.426224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.426403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.426435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.426636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.426669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.426920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.426956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 
00:35:59.807 [2024-12-15 13:16:07.427129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.427162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.427294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.427327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.427464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.427497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.427667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.427701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.427915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.427951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 
00:35:59.807 [2024-12-15 13:16:07.428144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.428176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.428289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.428328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.428532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.428564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.428673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.428706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 00:35:59.807 [2024-12-15 13:16:07.428901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.807 [2024-12-15 13:16:07.428938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.807 qpair failed and we were unable to recover it. 
00:35:59.807 [2024-12-15 13:16:07.429180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.807 [2024-12-15 13:16:07.429212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.807 qpair failed and we were unable to recover it.
00:35:59.810 [ ... the same posix.c:1054 / nvme_tcp.c:2288 error pair for tqpair=0x21f9cd0 (addr=10.0.0.2, port=4420) repeats through 13:16:07.453516, each followed by "qpair failed and we were unable to recover it." ... ]
00:35:59.810 [2024-12-15 13:16:07.453749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.810 [2024-12-15 13:16:07.453781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.810 qpair failed and we were unable to recover it.
00:35:59.810 [2024-12-15 13:16:07.453900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.453933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.454169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.454203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.454415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.454448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.454572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.454610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.454737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.454772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 
00:35:59.810 [2024-12-15 13:16:07.455031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.455066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.455329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.455362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.455479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.455512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.455687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.455720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.455877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.455912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 
00:35:59.810 [2024-12-15 13:16:07.456019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.456051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.456226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.456259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.456436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.456468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.456587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.456619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.456735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.456767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 
00:35:59.810 [2024-12-15 13:16:07.456915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.456950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.457187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.457220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.457361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.457394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.457651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.457683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.457863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.457897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 
00:35:59.810 [2024-12-15 13:16:07.458084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.810 [2024-12-15 13:16:07.458117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.810 qpair failed and we were unable to recover it. 00:35:59.810 [2024-12-15 13:16:07.458315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.458347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.458515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.458548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.458722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.458755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.458942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.458976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 
00:35:59.811 [2024-12-15 13:16:07.459079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.459111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.459324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.459357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.459618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.459651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.459841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.459876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.459998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.460031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 
00:35:59.811 [2024-12-15 13:16:07.460153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.460192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.460308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.460341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.460476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.460509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.460634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.460667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.460854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.460889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 
00:35:59.811 [2024-12-15 13:16:07.461084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.461117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.461302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.461335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.461443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.461475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.461662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.461695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.461815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.461856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 
00:35:59.811 [2024-12-15 13:16:07.462025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.462059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.462240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.462272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.462384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.462417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.462551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.462584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.462721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.462754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 
00:35:59.811 [2024-12-15 13:16:07.462934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.462968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.463080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.463113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.463219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.463251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.463364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.463396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.463592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.463626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 
00:35:59.811 [2024-12-15 13:16:07.463899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.463934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.464124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.464156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.464326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.464359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.464595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.464627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.464800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.464843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 
00:35:59.811 [2024-12-15 13:16:07.465019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.465052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.465311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.465344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.465533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.465570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.465686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.465720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.465840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.465874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 
00:35:59.811 [2024-12-15 13:16:07.466134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.811 [2024-12-15 13:16:07.466167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.811 qpair failed and we were unable to recover it. 00:35:59.811 [2024-12-15 13:16:07.466353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.466385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.466632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.466664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.466845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.466879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.467085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.467119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 
00:35:59.812 [2024-12-15 13:16:07.467288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.467321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.467490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.467523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.467732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.467764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.467879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.467913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.468088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.468120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 
00:35:59.812 [2024-12-15 13:16:07.468388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.468421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.468628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.468662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.468922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.468957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.469146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.469179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.469367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.469400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 
00:35:59.812 [2024-12-15 13:16:07.469613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.469646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.469895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.469929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.470102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.470135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.470335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.470367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 00:35:59.812 [2024-12-15 13:16:07.470535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.812 [2024-12-15 13:16:07.470567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.812 qpair failed and we were unable to recover it. 
00:35:59.814 [2024-12-15 13:16:07.489764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.814 [2024-12-15 13:16:07.489796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.814 qpair failed and we were unable to recover it.
00:35:59.814 [2024-12-15 13:16:07.490073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.814 [2024-12-15 13:16:07.490146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.814 qpair failed and we were unable to recover it.
00:35:59.814 [2024-12-15 13:16:07.490419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.814 [2024-12-15 13:16:07.490457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.814 qpair failed and we were unable to recover it.
00:35:59.814 [2024-12-15 13:16:07.490700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.814 [2024-12-15 13:16:07.490734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.814 qpair failed and we were unable to recover it.
00:35:59.814 [2024-12-15 13:16:07.491003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.814 [2024-12-15 13:16:07.491040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.814 qpair failed and we were unable to recover it.
00:35:59.815 [2024-12-15 13:16:07.495577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.495611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.495784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.495817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.496036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.496070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.496247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.496281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.496481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.496514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 
00:35:59.815 [2024-12-15 13:16:07.496715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.496749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.496876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.496910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.497083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.497116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.497304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.497338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.497517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.497550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 
00:35:59.815 [2024-12-15 13:16:07.497665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.497699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.497936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.497972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.498084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.498117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.498377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.498411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.498671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.498704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 
00:35:59.815 [2024-12-15 13:16:07.498903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.498939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.499084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.499119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.499294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.499327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.499591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.499624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.499865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.499899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 
00:35:59.815 [2024-12-15 13:16:07.500142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.500174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.500355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.500389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.500519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.500552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.500839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.815 [2024-12-15 13:16:07.500874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.815 qpair failed and we were unable to recover it. 00:35:59.815 [2024-12-15 13:16:07.501145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.501179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 
00:35:59.816 [2024-12-15 13:16:07.501296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.501328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.501570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.501603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.501782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.501815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.502070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.502103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.502242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.502281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 
00:35:59.816 [2024-12-15 13:16:07.502466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.502500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.502745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.502779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.502911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.502945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.503153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.503187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.503373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.503406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 
00:35:59.816 [2024-12-15 13:16:07.503677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.503711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.503899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.503935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.504117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.504150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.504268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.504302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.504474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.504507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 
00:35:59.816 [2024-12-15 13:16:07.504689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.504723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.504862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.504896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.505163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.505196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.505392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.505426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.505557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.505591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 
00:35:59.816 [2024-12-15 13:16:07.505778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.505812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.506069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.506102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.506341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.506374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.506555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.506587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.506795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.506851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 
00:35:59.816 [2024-12-15 13:16:07.507123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.507155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.507339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.507371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.507631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.507664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.507847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.507882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.508059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.508091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 
00:35:59.816 [2024-12-15 13:16:07.508216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.508248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.508360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.508393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.508520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.508554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.508677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.508709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.508843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.508876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 
00:35:59.816 [2024-12-15 13:16:07.509090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.509123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.509249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.509280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.509410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.816 [2024-12-15 13:16:07.509442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.816 qpair failed and we were unable to recover it. 00:35:59.816 [2024-12-15 13:16:07.509626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.509659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.509883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.509918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 
00:35:59.817 [2024-12-15 13:16:07.510111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.510144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.510326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.510359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.510546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.510578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.510752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.510785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.510928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.510968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 
00:35:59.817 [2024-12-15 13:16:07.511147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.511179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.511361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.511394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.511575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.511608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.511747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.511778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.511913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.511948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 
00:35:59.817 [2024-12-15 13:16:07.512051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.512082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.512332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.512364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.512482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.512514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.512751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.512785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.513071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.513106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 
00:35:59.817 [2024-12-15 13:16:07.513352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.513385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.513503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.513539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.513749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.513782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.513972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.514015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 00:35:59.817 [2024-12-15 13:16:07.514199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.817 [2024-12-15 13:16:07.514232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.817 qpair failed and we were unable to recover it. 
00:35:59.817 [2024-12-15 13:16:07.514372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.514404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.514588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.514622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.514811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.514857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.514964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.514996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.515124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.515156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.515340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.515374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.515479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.515511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.515702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.515733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.515916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.515949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.516135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.516169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.516280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.516317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.516503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.516577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.516790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.516841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.517039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.517074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.517251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.517285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.517470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.517503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.517628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.517661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.817 [2024-12-15 13:16:07.517843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.817 [2024-12-15 13:16:07.517877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.817 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.518122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.518155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.518395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.518428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.518551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.518584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.518770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.518804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.519007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.519042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.519214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.519247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.519436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.519469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.519600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.519633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.519816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.519860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.519968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.520000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.520137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.520170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.520347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.520379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.520509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.520543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.520661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.520693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.520891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.520926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.521182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.521215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.521325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.521359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.521466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.521499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.521674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.521707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.521916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.521950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.522074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.522113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.522356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.522389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.522509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.522542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.522723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.522757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.522876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.522911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.523092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.523125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.523250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.523284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.523458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.523491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.523674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.523707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.523815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.523858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.523983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.524017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.524125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.524158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.524275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.524308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.524590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.524622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.524840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.818 [2024-12-15 13:16:07.524875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.818 qpair failed and we were unable to recover it.
00:35:59.818 [2024-12-15 13:16:07.525060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.525093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.525279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.525313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.525557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.525590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.525777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.525810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.525930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.525965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.526164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.526198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.526326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.526358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.526601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.526636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.526746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.526780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.526913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.526949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.527152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.527185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.527368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.527400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.527667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.527707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.527886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.527923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.528049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.528082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.528259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.528292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.528479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.528513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.528687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.528720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.528904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.528938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.529116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.529148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.529326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.529360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.529622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.529656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.529844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.529878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.529985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.530018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.530126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.530159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.530363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.530396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.530533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.530567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.530811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.530851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.530986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.531018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.531124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.531157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.531277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.531309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.531419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.531451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.531555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.531588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.531787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.531821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.531970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.532003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.532120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.532154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.532420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.532454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.532652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.532685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.532810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.532855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.819 [2024-12-15 13:16:07.533111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.819 [2024-12-15 13:16:07.533146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.819 qpair failed and we were unable to recover it.
00:35:59.820 [2024-12-15 13:16:07.533397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.820 [2024-12-15 13:16:07.533429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.820 qpair failed and we were unable to recover it.
00:35:59.820 [2024-12-15 13:16:07.533612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.820 [2024-12-15 13:16:07.533646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.820 qpair failed and we were unable to recover it.
00:35:59.820 [2024-12-15 13:16:07.533767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.533799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.533931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.533965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.534139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.534172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.534341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.534375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.534576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.534609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 
00:35:59.820 [2024-12-15 13:16:07.534791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.534845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.535034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.535068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.535241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.535274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.535456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.535489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.535608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.535640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 
00:35:59.820 [2024-12-15 13:16:07.535915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.535949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.536195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.536230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.536350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.536383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.536502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.536536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.536668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.536701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 
00:35:59.820 [2024-12-15 13:16:07.536881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.536916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.537106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.537139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.537244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.537277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.537519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.537552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.537790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.537833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 
00:35:59.820 [2024-12-15 13:16:07.537957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.537991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.538175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.538208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.538322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.538357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.538536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.538570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.538842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.538878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 
00:35:59.820 [2024-12-15 13:16:07.539064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.539097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.539224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.539257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.539437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.539469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.539662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.539695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.539876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.539915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 
00:35:59.820 [2024-12-15 13:16:07.540041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.540074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.540259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.540292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.540485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.540518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.540695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.540727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.540910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.540944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 
00:35:59.820 [2024-12-15 13:16:07.541161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.541193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.541325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.541358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.820 [2024-12-15 13:16:07.541532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.820 [2024-12-15 13:16:07.541566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.820 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.541692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.541730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.541945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.541979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 
00:35:59.821 [2024-12-15 13:16:07.542111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.542144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.542408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.542441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.542633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.542667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.542873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.542908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.543117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.543151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 
00:35:59.821 [2024-12-15 13:16:07.543328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.543361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.543528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.543560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.543677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.543710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.543963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.543999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.544216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.544250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 
00:35:59.821 [2024-12-15 13:16:07.544378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.544411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.544529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.544560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.544770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.544803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.544999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.545032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.545233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.545267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 
00:35:59.821 [2024-12-15 13:16:07.545371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.545402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.545638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.545671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.545776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.545808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.546014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.546049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.546179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.546212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 
00:35:59.821 [2024-12-15 13:16:07.546471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.546504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.546627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.546659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.546780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.546813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.546995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.547030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.547243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.547276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 
00:35:59.821 [2024-12-15 13:16:07.547470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.547508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.547625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.547657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.547847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.547882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.548068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.548103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.548298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.548331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 
00:35:59.821 [2024-12-15 13:16:07.548509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.548543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.548653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.548687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.548818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.548860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.549035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.549071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.549312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.549347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 
00:35:59.821 [2024-12-15 13:16:07.549518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.549549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.549669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.821 [2024-12-15 13:16:07.549701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.821 qpair failed and we were unable to recover it. 00:35:59.821 [2024-12-15 13:16:07.549883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.822 [2024-12-15 13:16:07.549916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.822 qpair failed and we were unable to recover it. 00:35:59.822 [2024-12-15 13:16:07.550050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.822 [2024-12-15 13:16:07.550083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.822 qpair failed and we were unable to recover it. 00:35:59.822 [2024-12-15 13:16:07.550223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.822 [2024-12-15 13:16:07.550256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.822 qpair failed and we were unable to recover it. 
00:35:59.822 [2024-12-15 13:16:07.550438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.822 [2024-12-15 13:16:07.550469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.822 qpair failed and we were unable to recover it. 00:35:59.822 [2024-12-15 13:16:07.550705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.822 [2024-12-15 13:16:07.550738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.822 qpair failed and we were unable to recover it. 00:35:59.822 [2024-12-15 13:16:07.550867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.822 [2024-12-15 13:16:07.550901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.822 qpair failed and we were unable to recover it. 00:35:59.822 [2024-12-15 13:16:07.551075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.822 [2024-12-15 13:16:07.551109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.822 qpair failed and we were unable to recover it. 00:35:59.822 [2024-12-15 13:16:07.551305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.822 [2024-12-15 13:16:07.551338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.822 qpair failed and we were unable to recover it. 
00:35:59.822 [2024-12-15 13:16:07.551454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.822 [2024-12-15 13:16:07.551486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:35:59.822 qpair failed and we were unable to recover it.
[The connect()/qpair-failure message pair above (errno = 111, tqpair=0x21f9cd0, addr=10.0.0.2, port=4420) repeats verbatim with successive timestamps from 13:16:07.551667 through 13:16:07.575292; repeated occurrences omitted.]
00:35:59.825 [2024-12-15 13:16:07.575481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.575525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.575639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.575669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.575785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.575814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.575943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.575974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.576168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.576197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 
00:35:59.825 [2024-12-15 13:16:07.576305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.576336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.576455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.576485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.576674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.576706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.576889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.576924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.577119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.577151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 
00:35:59.825 [2024-12-15 13:16:07.577273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.577302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.577470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.577500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.577623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.577653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.577765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.577794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.577922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.577954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 
00:35:59.825 [2024-12-15 13:16:07.578086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.578118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.578234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.578264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.578451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.578481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.578606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.578636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.578760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.578790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 
00:35:59.825 [2024-12-15 13:16:07.578964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.578997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.579119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.579154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.579269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.579301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.579425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.579458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.579584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.579613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 
00:35:59.825 [2024-12-15 13:16:07.579728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.579757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.579872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.579904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.580024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.580053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.580224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.580297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.580468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.580541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 
00:35:59.825 [2024-12-15 13:16:07.580738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.580775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.825 qpair failed and we were unable to recover it. 00:35:59.825 [2024-12-15 13:16:07.580905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.825 [2024-12-15 13:16:07.580941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.581090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.581126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.581231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.581265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.581375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.581408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 
00:35:59.826 [2024-12-15 13:16:07.581546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.581581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.581696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.581729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.581976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.582011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.582186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.582218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.582345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.582378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 
00:35:59.826 [2024-12-15 13:16:07.582494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.582528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.582649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.582692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.582816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.582859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.583033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.583066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.583190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.583223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 
00:35:59.826 [2024-12-15 13:16:07.583403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.583435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.583559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.583592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.583769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.583803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.583959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.583991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.586133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.586195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 
00:35:59.826 [2024-12-15 13:16:07.586429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.586466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.586590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.586624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.586804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.586852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.587048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.587081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.587269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.587302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 
00:35:59.826 [2024-12-15 13:16:07.587494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.587529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.587666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.587699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.587895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.587932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.588037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.588071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.588257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.588290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 
00:35:59.826 [2024-12-15 13:16:07.588412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.588445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.588559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.588591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.588744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.588778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.588960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.588993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.589178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.589210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 
00:35:59.826 [2024-12-15 13:16:07.589323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.589355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.589463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.589499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.589688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.589721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.589854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.589890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.826 [2024-12-15 13:16:07.590015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.590048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 
00:35:59.826 [2024-12-15 13:16:07.590167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.826 [2024-12-15 13:16:07.590200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.826 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.590327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.590359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.590482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.590516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.590638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.590670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.590925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.590959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 
00:35:59.827 [2024-12-15 13:16:07.591131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.591164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.591358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.591391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.591570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.591604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.591789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.591823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.591953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.591987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 
00:35:59.827 [2024-12-15 13:16:07.592169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.592202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.592382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.592422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.592656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.592690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.592810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.592856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.592974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.593007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 
00:35:59.827 [2024-12-15 13:16:07.593210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.593244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.593356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.593388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.593569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.593602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.593772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.593805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.593953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.593987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 
00:35:59.827 [2024-12-15 13:16:07.594101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.594136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.594400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.594433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.594613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.594646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.594849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.594884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.595014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.595048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 
00:35:59.827 [2024-12-15 13:16:07.595175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.595208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.595315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.595346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.595551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.595585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.595703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.595736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.595863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.595899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 
00:35:59.827 [2024-12-15 13:16:07.596132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.596167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.596338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.596371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.596547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.596580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.596761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.596794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.827 [2024-12-15 13:16:07.596958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.596995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 
00:35:59.827 [2024-12-15 13:16:07.597173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.827 [2024-12-15 13:16:07.597206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.827 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.597395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.597427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.597661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.597696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.597874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.597948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.598122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.598191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 
00:35:59.828 [2024-12-15 13:16:07.598387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.598424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.598670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.598705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.598889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.598924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.599109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.599143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.599336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.599369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 
00:35:59.828 [2024-12-15 13:16:07.599545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.599578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.599699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.599731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.599928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.599962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.600096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.600129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.600248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.600280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 
00:35:59.828 [2024-12-15 13:16:07.600390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.600424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.600618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.600651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.600789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.600822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.601076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.601110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.601222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.601256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 
00:35:59.828 [2024-12-15 13:16:07.601358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.601391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.601524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.601557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.601738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.601772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.601902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.601937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.602145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.602178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 
00:35:59.828 [2024-12-15 13:16:07.602299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.602334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.602442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.602476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.602658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.602692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.602870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.602906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.603116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.603149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 
00:35:59.828 [2024-12-15 13:16:07.603267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.603301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.603436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.603469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.603609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.603642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.603770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.603804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.603951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.603985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 
00:35:59.828 [2024-12-15 13:16:07.604114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.604147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.604321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.604355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.604470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.604503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.604681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.604716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 00:35:59.828 [2024-12-15 13:16:07.604858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.828 [2024-12-15 13:16:07.604894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.828 qpair failed and we were unable to recover it. 
00:35:59.828 [2024-12-15 13:16:07.605076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.605109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.605379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.605412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.605596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.605630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.605748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.605786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.606007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.606043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 
00:35:59.829 [2024-12-15 13:16:07.606154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.606187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.606305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.606338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.606445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.606478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.606619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.606653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.606845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.606881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 
00:35:59.829 [2024-12-15 13:16:07.607001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.607034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.607150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.607183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.607288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.607321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.607436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.607470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.607600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.607633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 
00:35:59.829 [2024-12-15 13:16:07.607806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.607864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.607973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.608005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.608155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.608188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.608374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.608407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.608524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.608556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 
00:35:59.829 [2024-12-15 13:16:07.608688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.608721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.608862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.608897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.609010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.609043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.609167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.609201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.609320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.609353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 
00:35:59.829 [2024-12-15 13:16:07.609462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.609496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.609625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.609659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.609945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.609980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.610112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.610146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 00:35:59.829 [2024-12-15 13:16:07.610251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.829 [2024-12-15 13:16:07.610284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:35:59.829 qpair failed and we were unable to recover it. 
00:35:59.832 [2024-12-15 13:16:07.633140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.832 [2024-12-15 13:16:07.633173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.832 qpair failed and we were unable to recover it.
00:35:59.832 [2024-12-15 13:16:07.633371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.832 [2024-12-15 13:16:07.633405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.832 qpair failed and we were unable to recover it.
00:35:59.832 [2024-12-15 13:16:07.633638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.832 [2024-12-15 13:16:07.633667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.832 qpair failed and we were unable to recover it.
00:35:59.832 [2024-12-15 13:16:07.633772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.832 [2024-12-15 13:16:07.633801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:35:59.832 qpair failed and we were unable to recover it.
00:35:59.832 [2024-12-15 13:16:07.633985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.832 [2024-12-15 13:16:07.634054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:35:59.832 qpair failed and we were unable to recover it.
00:35:59.832 [2024-12-15 13:16:07.634200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.634237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.634429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.634463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.634635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.634668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.634884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.634921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.635131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.635165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 
00:35:59.832 [2024-12-15 13:16:07.635272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.635305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.635424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.635458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.635656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.635689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.635811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.635857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.636000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.636034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 
00:35:59.832 [2024-12-15 13:16:07.636228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.636261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.636441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.636476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.636659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.636694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.636838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.636872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.637060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.637095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 
00:35:59.832 [2024-12-15 13:16:07.637213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.637246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.637357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.637391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.637569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.637603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.637777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.637809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.637929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.637964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 
00:35:59.832 [2024-12-15 13:16:07.638096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.638130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.638237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.638270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.832 qpair failed and we were unable to recover it. 00:35:59.832 [2024-12-15 13:16:07.638387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.832 [2024-12-15 13:16:07.638421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.638612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.638647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.638820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.638866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 
00:35:59.833 [2024-12-15 13:16:07.638977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.639009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.639197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.639231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.639429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.639462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.639631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.639664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.639841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.639877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 
00:35:59.833 [2024-12-15 13:16:07.639998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.640030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.640156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.640188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.640302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.640335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.640518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.640557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.640728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.640761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 
00:35:59.833 [2024-12-15 13:16:07.640949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.640982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.641113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.641147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.641351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.641385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.641622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.641654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.641784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.641818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 
00:35:59.833 [2024-12-15 13:16:07.642014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.642047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.642154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.642187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.642304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.642337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.642450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.642483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.642674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.642708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 
00:35:59.833 [2024-12-15 13:16:07.642836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.642870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.642991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.643024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.643146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.643180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.643366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.643400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.643606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.643642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 
00:35:59.833 [2024-12-15 13:16:07.643842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.643875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.644004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.644037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.644172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.644205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.644381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.644414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.644533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.644565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 
00:35:59.833 [2024-12-15 13:16:07.644689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.644723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.644846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.644881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.645126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.645159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.645409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.645443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.645563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.645595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 
00:35:59.833 [2024-12-15 13:16:07.645870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.645906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.646027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.833 [2024-12-15 13:16:07.646061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.833 qpair failed and we were unable to recover it. 00:35:59.833 [2024-12-15 13:16:07.646258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.646291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.646472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.646505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.646613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.646646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 
00:35:59.834 [2024-12-15 13:16:07.646772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.646805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.646952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.646988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.647173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.647205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.647346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.647380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.647503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.647537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 
00:35:59.834 [2024-12-15 13:16:07.647756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.647788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.648011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.648045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.648166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.648199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.648408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.648448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.648581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.648615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 
00:35:59.834 [2024-12-15 13:16:07.648743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.648776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.648893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.648926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.649051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.649083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.649201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.649234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.649349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.649382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 
00:35:59.834 [2024-12-15 13:16:07.649560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.649594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.649699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.649731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.649848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.649882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.650061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.650095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.650199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.650232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 
00:35:59.834 [2024-12-15 13:16:07.650351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.650384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.650492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.650524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.650639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.650673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.650786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.650819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 00:35:59.834 [2024-12-15 13:16:07.650957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.834 [2024-12-15 13:16:07.650990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:35:59.834 qpair failed and we were unable to recover it. 
00:35:59.836 [2024-12-15 13:16:07.660841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.836 [2024-12-15 13:16:07.660916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:35:59.836 qpair failed and we were unable to recover it.
00:35:59.836 [2024-12-15 13:16:07.661128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.836 [2024-12-15 13:16:07.661164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:35:59.836 qpair failed and we were unable to recover it.
00:35:59.836 [2024-12-15 13:16:07.661365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.836 [2024-12-15 13:16:07.661400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:35:59.836 qpair failed and we were unable to recover it.
00:35:59.836 [2024-12-15 13:16:07.661532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.836 [2024-12-15 13:16:07.661566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:35:59.836 qpair failed and we were unable to recover it.
00:35:59.836 [2024-12-15 13:16:07.661750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:59.836 [2024-12-15 13:16:07.661785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:35:59.836 qpair failed and we were unable to recover it.
00:35:59.837 [2024-12-15 13:16:07.674045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.674077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.674290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.674323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.674527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.674562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.674690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.674724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.674968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.675003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 
00:35:59.837 [2024-12-15 13:16:07.675131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.675164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.675272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.675305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.675504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.675537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.675715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.675749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.675967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.676003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 
00:35:59.837 [2024-12-15 13:16:07.676142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.676176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.676426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.676459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.676574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.676607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.676784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.676817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 00:35:59.837 [2024-12-15 13:16:07.677117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.837 [2024-12-15 13:16:07.677151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.837 qpair failed and we were unable to recover it. 
00:35:59.837 [2024-12-15 13:16:07.677344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.677378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.677618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.677652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.677894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.677931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.678106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.678140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.678255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.678288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 
00:35:59.838 [2024-12-15 13:16:07.678468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.678503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.678622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.678655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.678770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.678803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.679128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.679161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.679349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.679382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 
00:35:59.838 [2024-12-15 13:16:07.679514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.679547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.679724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.679757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.679931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.679966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.680152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.680185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.680383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.680417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 
00:35:59.838 [2024-12-15 13:16:07.680611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.680645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.680764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.680797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.680955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.680990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.681166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.681206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.681379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.681413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 
00:35:59.838 [2024-12-15 13:16:07.681582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.681615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.681738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.681771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.681896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.681931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.682103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.682137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.682262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.682296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 
00:35:59.838 [2024-12-15 13:16:07.682488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.682522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.682646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.682680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.682784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.682816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.683008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.683041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.683253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.683287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 
00:35:59.838 [2024-12-15 13:16:07.683548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.683582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.683759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.683792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.683940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.683975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:35:59.838 [2024-12-15 13:16:07.684114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.838 [2024-12-15 13:16:07.684147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:35:59.838 qpair failed and we were unable to recover it. 00:36:00.118 [2024-12-15 13:16:07.684325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.118 [2024-12-15 13:16:07.684359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.118 qpair failed and we were unable to recover it. 
00:36:00.118 [2024-12-15 13:16:07.684550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.118 [2024-12-15 13:16:07.684585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.118 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.684761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.684795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.684979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.685014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.685195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.685229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.685405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.685439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 
00:36:00.119 [2024-12-15 13:16:07.685634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.685667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.685791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.685836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.685955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.685989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.686125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.686158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.686292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.686326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 
00:36:00.119 [2024-12-15 13:16:07.686553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.686626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.686765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.686802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.687072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.687107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.687295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.687330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.687530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.687565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 
00:36:00.119 [2024-12-15 13:16:07.687749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.687782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.687985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.688021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.688158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.688192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.688390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.688423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.688607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.688639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 
00:36:00.119 [2024-12-15 13:16:07.688815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.688865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.689127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.689160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.689288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.689321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.689491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.689533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.689719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.689752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 
00:36:00.119 [2024-12-15 13:16:07.689930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.689967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.690100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.690133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.690320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.690354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.690550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.690585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.690704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.690737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 
00:36:00.119 [2024-12-15 13:16:07.690911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.690945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.691078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.691112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.691298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.691331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.691570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.691603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.691791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.691835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 
00:36:00.119 [2024-12-15 13:16:07.692029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.692063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.692186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.692220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.692424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.692457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.119 [2024-12-15 13:16:07.692636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.119 [2024-12-15 13:16:07.692669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.119 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.692882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.692917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 
00:36:00.120 [2024-12-15 13:16:07.693173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.693206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.693381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.693413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.693547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.693579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.693753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.693787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.693927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.693961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 
00:36:00.120 [2024-12-15 13:16:07.694146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.694179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.694372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.694405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.694588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.694627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.694805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.694854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.694987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.695022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 
00:36:00.120 [2024-12-15 13:16:07.695208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.695281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.695488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.695525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.695770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.695805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.696013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.696046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.696224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.696257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 
00:36:00.120 [2024-12-15 13:16:07.696396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.696430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.696553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.696586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.696697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.696730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.696924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.696960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.697086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.697117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 
00:36:00.120 [2024-12-15 13:16:07.697256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.697290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.697505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.697538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.697799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.697845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.697953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.697986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.698177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.698210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 
00:36:00.120 [2024-12-15 13:16:07.698390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.698423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.698634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.698669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.698862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.698897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.699020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.699054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.699320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.699352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 
00:36:00.120 [2024-12-15 13:16:07.699468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.699501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.699629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.699662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.699783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.699816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.700022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.700056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.700296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.700329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 
00:36:00.120 [2024-12-15 13:16:07.700454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.700486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.700603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.700636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.120 [2024-12-15 13:16:07.700743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.120 [2024-12-15 13:16:07.700783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.120 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.700913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.700948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.701129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.701163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 
00:36:00.121 [2024-12-15 13:16:07.701292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.701327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.701512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.701544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.701823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.701866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.702060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.702107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.702312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.702344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 
00:36:00.121 [2024-12-15 13:16:07.702656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.702691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.702818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.702861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.703057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.703090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.703210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.703243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.703371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.703405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 
00:36:00.121 [2024-12-15 13:16:07.703509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.703542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.703725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.703759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.703890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.703925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.704036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.704067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.704205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.704238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 
00:36:00.121 [2024-12-15 13:16:07.704415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.704448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.704562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.704594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.704715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.704748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.704862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.704897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.705074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.705106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 
00:36:00.121 [2024-12-15 13:16:07.705287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.705321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.705506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.705540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.705650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.705682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.705792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.705834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.706025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.706063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 
00:36:00.121 [2024-12-15 13:16:07.706255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.706288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.706527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.706561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.706690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.706723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.706988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.707022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.707194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.707227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 
00:36:00.121 [2024-12-15 13:16:07.707340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.707372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.707628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.707662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.707895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.707930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.708061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.708093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.708283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.708317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 
00:36:00.121 [2024-12-15 13:16:07.708499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.708531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.708794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.121 [2024-12-15 13:16:07.708853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.121 qpair failed and we were unable to recover it. 00:36:00.121 [2024-12-15 13:16:07.709041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.709076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.709257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.709290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.709409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.709442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 
00:36:00.122 [2024-12-15 13:16:07.709634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.709666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.709909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.709943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.710125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.710159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.710339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.710374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.710557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.710590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 
00:36:00.122 [2024-12-15 13:16:07.710777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.710811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.710935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.710968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.711256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.711291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.711406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.711440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 00:36:00.122 [2024-12-15 13:16:07.711622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.122 [2024-12-15 13:16:07.711655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.122 qpair failed and we were unable to recover it. 
00:36:00.122 [2024-12-15 13:16:07.711917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.711953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.712150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.712183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.712382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.712416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.712541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.712574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.712690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.712722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.712964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.713000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.713122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.713154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.713256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.713291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.713534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.713568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.713671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.713704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.713942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.713976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.714084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.714116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.714254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.714288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.714487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.714521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.714717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.714750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.714880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.714914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.715035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.715068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.715278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.715312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.715423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.715457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.715572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.715605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.715780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.715812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.716007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.716041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.716213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.716247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.716359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.716392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.716585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.716618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.716743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.716777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.122 [2024-12-15 13:16:07.716899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.122 [2024-12-15 13:16:07.716933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.122 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.717108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.717140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.717407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.717442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.717561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.717595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.717714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.717747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.717968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.718003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.718107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.718139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.718322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.718356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.718602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.718636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.718764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.718796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.718962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.718995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.719239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.719273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.719481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.719516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.719644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.719677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.719782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.719815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.720023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.720055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.720242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.720281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.720402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.720435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.720570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.720602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.720723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.720755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.720944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.720979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.721080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.721111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.721284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.721316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.721499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.721531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.721649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.721684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.721876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.721910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.722024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.722056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.722171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.722204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.722373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.722404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.722669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.722703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.722890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.722924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.723100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.723134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.723320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.723353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.723551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.123 [2024-12-15 13:16:07.723583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.123 qpair failed and we were unable to recover it.
00:36:00.123 [2024-12-15 13:16:07.723842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.723876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.724005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.724037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.724142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.724175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.724356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.724390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.724575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.724608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.724783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.724815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.724934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.724966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.725156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.725190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.725377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.725410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.725708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.725747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.725866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.725902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.726079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.726111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.726288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.726321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.726491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.726525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.726640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.726673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.726793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.726836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.727076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.727110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.727302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.727336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.727467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.727499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.727632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.727665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.727796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.727835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.728024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.728059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.728315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.728348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.728467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.728500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.728684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.728715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.728835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.728869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.728993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.729026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.729165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.729197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.729390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.729423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.729544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.729577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.729689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.729722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.729841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.729876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.729993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.730027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.730143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.730176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.730417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.124 [2024-12-15 13:16:07.730449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.124 qpair failed and we were unable to recover it.
00:36:00.124 [2024-12-15 13:16:07.730715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.124 [2024-12-15 13:16:07.730748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.124 qpair failed and we were unable to recover it. 00:36:00.124 [2024-12-15 13:16:07.730921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.124 [2024-12-15 13:16:07.730961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.124 qpair failed and we were unable to recover it. 00:36:00.124 [2024-12-15 13:16:07.731205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.124 [2024-12-15 13:16:07.731237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.124 qpair failed and we were unable to recover it. 00:36:00.124 [2024-12-15 13:16:07.731482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.124 [2024-12-15 13:16:07.731515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.124 qpair failed and we were unable to recover it. 00:36:00.124 [2024-12-15 13:16:07.731697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.124 [2024-12-15 13:16:07.731729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.124 qpair failed and we were unable to recover it. 
00:36:00.125 [2024-12-15 13:16:07.731912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.731946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.732067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.732100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.732367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.732400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.732572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.732605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.732779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.732812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 
00:36:00.125 [2024-12-15 13:16:07.732931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.732964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.733088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.733121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.733248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.733280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.733391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.733424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.733598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.733630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 
00:36:00.125 [2024-12-15 13:16:07.733819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.733884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.734071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.734104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.734240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.734273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.734515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.734549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.734798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.734841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 
00:36:00.125 [2024-12-15 13:16:07.734963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.734997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.735190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.735223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.735367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.735400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.735577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.735610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.735800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.735844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 
00:36:00.125 [2024-12-15 13:16:07.735963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.735996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.736185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.736217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.736400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.736433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.736636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.736669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.736943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.736978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 
00:36:00.125 [2024-12-15 13:16:07.737091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.737124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.737370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.737403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.737522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.737554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.737798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.737838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.738021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.738054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 
00:36:00.125 [2024-12-15 13:16:07.738167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.738200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.738318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.738352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.738640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.738673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.738786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.738817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.738947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.738980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 
00:36:00.125 [2024-12-15 13:16:07.739088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.739121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.739316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.739348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.739599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.739672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.739814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.739871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.125 qpair failed and we were unable to recover it. 00:36:00.125 [2024-12-15 13:16:07.740066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.125 [2024-12-15 13:16:07.740101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 
00:36:00.126 [2024-12-15 13:16:07.740321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.740354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.740526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.740559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.740799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.740849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.741100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.741133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.741402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.741436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 
00:36:00.126 [2024-12-15 13:16:07.741635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.741669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.741793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.741840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.742035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.742068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.742269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.742302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.742431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.742464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 
00:36:00.126 [2024-12-15 13:16:07.742741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.742783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.742992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.743028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.743274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.743306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.743416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.743448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.743707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.743741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 
00:36:00.126 [2024-12-15 13:16:07.743948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.743984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.744209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.744242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.744346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.744380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.744507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.744540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.744837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.744871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 
00:36:00.126 [2024-12-15 13:16:07.745112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.745145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.745434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.745468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.745689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.745722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.745931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.745965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.746172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.746207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 
00:36:00.126 [2024-12-15 13:16:07.746480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.746513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.746706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.746738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.746925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.746961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.747204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.747237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.747421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.747453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 
00:36:00.126 [2024-12-15 13:16:07.747589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.747622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.747738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.747771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.747962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.747997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.748183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.748216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.748324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.748357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 
00:36:00.126 [2024-12-15 13:16:07.748463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.748496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.748623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.748656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.748810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.126 [2024-12-15 13:16:07.748902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.126 qpair failed and we were unable to recover it. 00:36:00.126 [2024-12-15 13:16:07.749187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.127 [2024-12-15 13:16:07.749261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.127 qpair failed and we were unable to recover it. 00:36:00.127 [2024-12-15 13:16:07.749475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.127 [2024-12-15 13:16:07.749513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.127 qpair failed and we were unable to recover it. 
00:36:00.127 [2024-12-15 13:16:07.749774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.749807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.750032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.750067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.750248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.750282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.750465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.750497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.750620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.750653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.750759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.750791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.750999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.751033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.751166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.751199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.751390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.751422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.751548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.751581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.751693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.751726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.751916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.751950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.752148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.752180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.752378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.752410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.752600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.752633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.752764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.752797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.753004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.753036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.753217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.753249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.753426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.753457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.753720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.753752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.753891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.753926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.754169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.754201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.754399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.754432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.754560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.754594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.754790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.754838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.754980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.755013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.755148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.755179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.755424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.755457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.755588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.755622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.755866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.755900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.756094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.756127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.756385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.756418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.756664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.756697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.127 [2024-12-15 13:16:07.756811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.127 [2024-12-15 13:16:07.756855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.127 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.757045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.757077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.757198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.757230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.757350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.757382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.757566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.757598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.757732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.757764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.757964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.757999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.758142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.758176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.758286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.758318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.758505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.758539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.758664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.758697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.758933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.758969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.759141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.759174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.759381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.759413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.759657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.759689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.759927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.759960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.760153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.760184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.760370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.760402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.760510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.760548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.760747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.760781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.760918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.760951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.761214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.761246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.761365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.761396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.761511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.761545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.761736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.761767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.761894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.761927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.762100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.762132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.762247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.762279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.762450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.762480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.762741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.762775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.762972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.763004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.763313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.763346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.763522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.763554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.763814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.763878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.764050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.764083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.764265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.764297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.764486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.764518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.764691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.764724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.764897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.764931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.765107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.128 [2024-12-15 13:16:07.765139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.128 qpair failed and we were unable to recover it.
00:36:00.128 [2024-12-15 13:16:07.765326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.765357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.765482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.765516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.765624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.765655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.765771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.765804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.766064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.766099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.766237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.766268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.766391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.766423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.766555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.766587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.766761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.766794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.766991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.767024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.767143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.767176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.767445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.767477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.767725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.767759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.767952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.767986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.768205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.768238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.768420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.768453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.768634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.768667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.768770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.768801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.769074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.769107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.769350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.769421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.769633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.769670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.769885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.769924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.770060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.770092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.770273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.770306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.770481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.770514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.770688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.770721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.770904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.770940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.771123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.771157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.771345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.771377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.771499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.771532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.771772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.771805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.772055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.772089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.772221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.772271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.772444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.772478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.772665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.772697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.772936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.772970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.773210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.773242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.773489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.773522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.773811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.773858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.774044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.129 [2024-12-15 13:16:07.774076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.129 qpair failed and we were unable to recover it.
00:36:00.129 [2024-12-15 13:16:07.774196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.130 [2024-12-15 13:16:07.774228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.130 qpair failed and we were unable to recover it.
00:36:00.130 [2024-12-15 13:16:07.774466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.130 [2024-12-15 13:16:07.774500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.130 qpair failed and we were unable to recover it.
00:36:00.130 [2024-12-15 13:16:07.774644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.774676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.774811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.774857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.775051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.775083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.775323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.775356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.775642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.775675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 
00:36:00.130 [2024-12-15 13:16:07.775807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.775852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.775960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.775989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.776233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.776266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.776406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.776439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.776705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.776738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 
00:36:00.130 [2024-12-15 13:16:07.776878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.776913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.777126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.777160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.777335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.777367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.777559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.777591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.777804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.777846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 
00:36:00.130 [2024-12-15 13:16:07.778089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.778121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.778306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.778338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.778651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.778723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.778864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.778903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.779153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.779187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 
00:36:00.130 [2024-12-15 13:16:07.779449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.779483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.779724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.779756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.780023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.780057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.780310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.780343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.780603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.780637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 
00:36:00.130 [2024-12-15 13:16:07.780879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.780914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.781152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.781185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.781358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.781392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.781571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.781604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.781848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.781883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 
00:36:00.130 [2024-12-15 13:16:07.782127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.782171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.782351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.782384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.782594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.782627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.782806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.782852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.783073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.783107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 
00:36:00.130 [2024-12-15 13:16:07.783236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.783268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.783463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.130 [2024-12-15 13:16:07.783496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.130 qpair failed and we were unable to recover it. 00:36:00.130 [2024-12-15 13:16:07.783683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.783716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.783844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.783879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.784063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.784095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 
00:36:00.131 [2024-12-15 13:16:07.784235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.784269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.784461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.784494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.784723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.784757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.784937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.784972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.785093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.785128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 
00:36:00.131 [2024-12-15 13:16:07.785405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.785438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.785621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.785655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.785895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.785930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.786115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.786149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.786438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.786472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 
00:36:00.131 [2024-12-15 13:16:07.786645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.786679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.786970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.787005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.787128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.787161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.787274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.787307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.787483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.787516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 
00:36:00.131 [2024-12-15 13:16:07.787665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.787699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.787905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.787940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.788104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.788177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.788373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.788410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.788631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.788664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 
00:36:00.131 [2024-12-15 13:16:07.788904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.788939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.789133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.789167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.789365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.789398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.789508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.789542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.789740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.789773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 
00:36:00.131 [2024-12-15 13:16:07.789978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.790012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.790156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.790190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.790394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.790429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.790555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.790589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.790796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.790844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 
00:36:00.131 [2024-12-15 13:16:07.790964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.791008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.791149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.791183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.791441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.791475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.791643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.791676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.791790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.791841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 
00:36:00.131 [2024-12-15 13:16:07.792020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.131 [2024-12-15 13:16:07.792054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.131 qpair failed and we were unable to recover it. 00:36:00.131 [2024-12-15 13:16:07.792344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.132 [2024-12-15 13:16:07.792377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.132 qpair failed and we were unable to recover it. 00:36:00.132 [2024-12-15 13:16:07.792555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.132 [2024-12-15 13:16:07.792588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.132 qpair failed and we were unable to recover it. 00:36:00.132 [2024-12-15 13:16:07.792765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.132 [2024-12-15 13:16:07.792798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.132 qpair failed and we were unable to recover it. 00:36:00.132 [2024-12-15 13:16:07.792984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.132 [2024-12-15 13:16:07.793019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.132 qpair failed and we were unable to recover it. 
00:36:00.135 [2024-12-15 13:16:07.815924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.815959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.816157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.816191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.816374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.816407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.816581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.816614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.816718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.816752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 
00:36:00.135 [2024-12-15 13:16:07.816935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.816971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.817164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.817198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.817401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.817435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.817554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.817588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.817836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.817871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 
00:36:00.135 [2024-12-15 13:16:07.818052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.818085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.818257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.818291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.818565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.818599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.818719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.818753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.818892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.818928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 
00:36:00.135 [2024-12-15 13:16:07.819051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.819085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.819265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.819298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.819488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.819521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.819655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.819689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.819863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.819898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 
00:36:00.135 [2024-12-15 13:16:07.820015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.820049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.820240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.820273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.820392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.820426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.820545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.820578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.820708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.820741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 
00:36:00.135 [2024-12-15 13:16:07.820983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.821018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.821193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.821226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.821359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.821398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.821575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.821609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.821783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.821816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 
00:36:00.135 [2024-12-15 13:16:07.822013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.822047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.822162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.822195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.822307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.822341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.822515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.822548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.822658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.822692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 
00:36:00.135 [2024-12-15 13:16:07.822865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.822899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.823015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.823048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.823237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.823270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.135 qpair failed and we were unable to recover it. 00:36:00.135 [2024-12-15 13:16:07.823394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.135 [2024-12-15 13:16:07.823427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.823635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.823669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 
00:36:00.136 [2024-12-15 13:16:07.823878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.823913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.824162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.824195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.824311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.824344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.824468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.824501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.824607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.824640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 
00:36:00.136 [2024-12-15 13:16:07.824777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.824811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.825027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.825061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.825182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.825215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.825332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.825366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.825626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.825660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 
00:36:00.136 [2024-12-15 13:16:07.825867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.825903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.826094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.826128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.826299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.826332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.826510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.826543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.826723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.826757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 
00:36:00.136 [2024-12-15 13:16:07.826867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.826902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.827115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.827148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.827257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.827290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.827546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.827579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.827772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.827804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 
00:36:00.136 [2024-12-15 13:16:07.827996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.828030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.828204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.828237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.828427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.828460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.828704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.828738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.828856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.828888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 
00:36:00.136 [2024-12-15 13:16:07.829072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.829107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.829211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.829245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.829426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.829466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.829673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.829707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.829846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.829882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 
00:36:00.136 [2024-12-15 13:16:07.830097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.830131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.830241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.830275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.830399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.830432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.136 [2024-12-15 13:16:07.830608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.136 [2024-12-15 13:16:07.830641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.136 qpair failed and we were unable to recover it. 00:36:00.137 [2024-12-15 13:16:07.830821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.137 [2024-12-15 13:16:07.830868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.137 qpair failed and we were unable to recover it. 
00:36:00.137 [2024-12-15 13:16:07.830987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:00.137 [2024-12-15 13:16:07.831021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 
00:36:00.137 qpair failed and we were unable to recover it. 
[The same three-record error sequence (posix_sock_create errno 111 → nvme_tcp_qpair_connect_sock error for tqpair=0x7fbae8000b90, addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously from 13:16:07.831 through 13:16:07.853; duplicate records omitted.]
00:36:00.139 [2024-12-15 13:16:07.853871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.139 [2024-12-15 13:16:07.853906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.139 qpair failed and we were unable to recover it. 00:36:00.139 [2024-12-15 13:16:07.854051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.854084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.854276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.854309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.854503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.854536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.854660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.854691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 
00:36:00.140 [2024-12-15 13:16:07.854796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.854838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.855103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.855137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.855329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.855363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.855478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.855510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.855692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.855724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 
00:36:00.140 [2024-12-15 13:16:07.855852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.855886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.856019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.856053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.856240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.856274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.856533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.856565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.856761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.856794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 
00:36:00.140 [2024-12-15 13:16:07.856908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.856941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.857128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.857161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.857350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.857384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.857566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.857599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.857711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.857744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 
00:36:00.140 [2024-12-15 13:16:07.857930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.857965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.858177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.858212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.858333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.858367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.858483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.858516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.858647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.858679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 
00:36:00.140 [2024-12-15 13:16:07.858868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.858903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.859098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.859130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.859254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.859286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.859529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.859561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.859686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.859718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 
00:36:00.140 [2024-12-15 13:16:07.859899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.859934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.860196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.860228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.860338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.860371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.860546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.860579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.860820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.860865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 
00:36:00.140 [2024-12-15 13:16:07.860976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.861007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.861135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.861169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.861454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.861494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.861704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.861737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 00:36:00.140 [2024-12-15 13:16:07.861981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.862016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.140 qpair failed and we were unable to recover it. 
00:36:00.140 [2024-12-15 13:16:07.862315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.140 [2024-12-15 13:16:07.862348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.862469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.862502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.862775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.862808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.863020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.863053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.863302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.863335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 
00:36:00.141 [2024-12-15 13:16:07.863518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.863553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.863687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.863719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.863912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.863945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.864185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.864218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.864336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.864368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 
00:36:00.141 [2024-12-15 13:16:07.864497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.864531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.864737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.864770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.864900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.864935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.865046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.865080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.865275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.865309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 
00:36:00.141 [2024-12-15 13:16:07.865522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.865556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.865679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.865712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.865913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.865948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.866153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.866186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.866316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.866350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 
00:36:00.141 [2024-12-15 13:16:07.866615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.866650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.866846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.866881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.867052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.867085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.867271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.867305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.867468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.867541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 
00:36:00.141 [2024-12-15 13:16:07.867685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.867723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.867923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.867960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.868200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.868233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.868363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.868396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.868517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.868552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 
00:36:00.141 [2024-12-15 13:16:07.868729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.868763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.868891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.868926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.869057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.869091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.869281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.869314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 00:36:00.141 [2024-12-15 13:16:07.869438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.141 [2024-12-15 13:16:07.869471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.141 qpair failed and we were unable to recover it. 
00:36:00.141 [2024-12-15 13:16:07.869710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.142 [2024-12-15 13:16:07.869745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.142 qpair failed and we were unable to recover it.
00:36:00.142 [2024-12-15 13:16:07.875864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.142 [2024-12-15 13:16:07.875903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.142 qpair failed and we were unable to recover it.
00:36:00.144 [2024-12-15 13:16:07.894072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.144 [2024-12-15 13:16:07.894104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.144 qpair failed and we were unable to recover it. 00:36:00.144 [2024-12-15 13:16:07.894280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.144 [2024-12-15 13:16:07.894313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.144 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.894549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.894582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.894823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.894868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.894987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.895021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 
00:36:00.145 [2024-12-15 13:16:07.895136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.895171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.895353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.895386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.895558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.895597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.895708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.895741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.895923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.895958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 
00:36:00.145 [2024-12-15 13:16:07.896148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.896181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.896352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.896386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.896594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.896627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.896834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.896869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.897061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.897095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 
00:36:00.145 [2024-12-15 13:16:07.897228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.897259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.897433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.897467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.897660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.897693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.897903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.897938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.898049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.898081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 
00:36:00.145 [2024-12-15 13:16:07.898271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.898304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.898496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.898531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.898713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.898747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.898923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.898959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.899073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.899105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 
00:36:00.145 [2024-12-15 13:16:07.899287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.899321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.899586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.899619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.899742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.899775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.900033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.900068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.900177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.900209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 
00:36:00.145 [2024-12-15 13:16:07.900405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.900437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.900614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.900646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.900838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.900873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.901142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.901175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.901432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.901507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 
00:36:00.145 [2024-12-15 13:16:07.901666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.901703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.145 qpair failed and we were unable to recover it. 00:36:00.145 [2024-12-15 13:16:07.901817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.145 [2024-12-15 13:16:07.901873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.902093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.902128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.902370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.902407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.902645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.902678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 
00:36:00.146 [2024-12-15 13:16:07.902880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.902916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.903098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.903130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.903362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.903394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.903613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.903646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.903767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.903798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 
00:36:00.146 [2024-12-15 13:16:07.903995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.904029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.904227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.904259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.904369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.904401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.904652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.904685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.904885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.904919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 
00:36:00.146 [2024-12-15 13:16:07.905182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.905214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.905338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.905369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.905542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.905576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.905711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.905744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.905881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.905914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 
00:36:00.146 [2024-12-15 13:16:07.906032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.906064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.906180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.906211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.906392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.906424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.906690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.906724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.906852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.906886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 
00:36:00.146 [2024-12-15 13:16:07.907065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.907100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.907273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.907311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.907433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.907467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.907587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.907621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.907813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.907859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 
00:36:00.146 [2024-12-15 13:16:07.908045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.908078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.908251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.908284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.908426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.908459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.908698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.908732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.908910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.908944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 
00:36:00.146 [2024-12-15 13:16:07.909182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.909216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.909429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.909462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.146 [2024-12-15 13:16:07.909661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.146 [2024-12-15 13:16:07.909693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.146 qpair failed and we were unable to recover it. 00:36:00.147 [2024-12-15 13:16:07.909822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.147 [2024-12-15 13:16:07.909877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.147 qpair failed and we were unable to recover it. 00:36:00.147 [2024-12-15 13:16:07.910017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.147 [2024-12-15 13:16:07.910049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.147 qpair failed and we were unable to recover it. 
00:36:00.147 [2024-12-15 13:16:07.910176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.147 [2024-12-15 13:16:07.910211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.147 qpair failed and we were unable to recover it. 00:36:00.147 [2024-12-15 13:16:07.910455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.147 [2024-12-15 13:16:07.910489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.147 qpair failed and we were unable to recover it. 00:36:00.147 [2024-12-15 13:16:07.910715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.147 [2024-12-15 13:16:07.910748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.147 qpair failed and we were unable to recover it. 00:36:00.147 [2024-12-15 13:16:07.910928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.147 [2024-12-15 13:16:07.910963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.147 qpair failed and we were unable to recover it. 00:36:00.147 [2024-12-15 13:16:07.911174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.147 [2024-12-15 13:16:07.911208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.147 qpair failed and we were unable to recover it. 
00:36:00.149 [2024-12-15 13:16:07.935521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.149 [2024-12-15 13:16:07.935554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.149 qpair failed and we were unable to recover it. 00:36:00.149 [2024-12-15 13:16:07.935723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.935757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.935949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.935982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.936152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.936184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.936362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.936397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 
00:36:00.150 [2024-12-15 13:16:07.936585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.936618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.936735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.936767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.936962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.936996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.937130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.937161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.937344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.937378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 
00:36:00.150 [2024-12-15 13:16:07.937510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.937542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.937732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.937764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.937958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.937992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.938184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.938216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.938459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.938493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 
00:36:00.150 [2024-12-15 13:16:07.938748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.938780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.939030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.939065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.939305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.939338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.939589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.939622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.939868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.939904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 
00:36:00.150 [2024-12-15 13:16:07.940024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.940064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.940182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.940214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.940465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.940498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.940736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.940768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.940996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.941031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 
00:36:00.150 [2024-12-15 13:16:07.941246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.941280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.941417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.941450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.941724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.941757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.941976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.942011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.942193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.942226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 
00:36:00.150 [2024-12-15 13:16:07.942415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.942448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.942562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.942595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.942778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.942810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.942996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.943029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.943258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.943330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 
00:36:00.150 [2024-12-15 13:16:07.943530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.943568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.150 qpair failed and we were unable to recover it. 00:36:00.150 [2024-12-15 13:16:07.943750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.150 [2024-12-15 13:16:07.943784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.943945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.943980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.944179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.944213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.944350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.944384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 
00:36:00.151 [2024-12-15 13:16:07.944560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.944593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.944846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.944881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.945060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.945094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.945273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.945305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.945412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.945445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 
00:36:00.151 [2024-12-15 13:16:07.945718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.945751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.946011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.946046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.946239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.946281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.946466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.946500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.946626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.946660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 
00:36:00.151 [2024-12-15 13:16:07.946801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.946844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.947025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.947058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.947229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.947261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.947506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.947539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.947729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.947761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 
00:36:00.151 [2024-12-15 13:16:07.947949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.947990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.948118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.948152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.948279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.948313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.948516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.948549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.948788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.948823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 
00:36:00.151 [2024-12-15 13:16:07.949009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.949042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.949237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.949270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.949446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.949479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.949598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.949631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.949893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.949929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 
00:36:00.151 [2024-12-15 13:16:07.950215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.950249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.950430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.950464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.950657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.950689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.950877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.950912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.951051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.951084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 
00:36:00.151 [2024-12-15 13:16:07.951334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.951367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.951606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.951639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.951888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.951922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.952120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.952153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 00:36:00.151 [2024-12-15 13:16:07.952323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.151 [2024-12-15 13:16:07.952395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.151 qpair failed and we were unable to recover it. 
00:36:00.151 [2024-12-15 13:16:07.952672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.151 [2024-12-15 13:16:07.952710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.151 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry from 13:16:07.952 through 13:16:07.978 ...]
00:36:00.155 [2024-12-15 13:16:07.978585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.155 [2024-12-15 13:16:07.978617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.155 qpair failed and we were unable to recover it.
00:36:00.155 [2024-12-15 13:16:07.978802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.978845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.979031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.979063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.979190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.979223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.979515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.979547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.979664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.979698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 
00:36:00.155 [2024-12-15 13:16:07.979914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.979949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.980150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.980183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.980370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.980403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.980585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.980617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.980739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.980772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 
00:36:00.155 [2024-12-15 13:16:07.980915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.980949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.981188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.981221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.981405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.981438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.981632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.981666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.981798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.981838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 
00:36:00.155 [2024-12-15 13:16:07.981967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.982001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.982186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.982220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.982392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.982430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.982624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.982657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.982760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.982793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 
00:36:00.155 [2024-12-15 13:16:07.982978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.983012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.983252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.983285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.983477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.983510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.983698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.983731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.983984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.984019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 
00:36:00.155 [2024-12-15 13:16:07.984157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.984191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.984363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.984395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.984588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.984621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.984822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.984874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 00:36:00.155 [2024-12-15 13:16:07.985057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.155 [2024-12-15 13:16:07.985091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.155 qpair failed and we were unable to recover it. 
00:36:00.155 [2024-12-15 13:16:07.985259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.985292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.985471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.985504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.985619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.985652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.985914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.985950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.986129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.986161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 
00:36:00.156 [2024-12-15 13:16:07.986335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.986368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.986625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.986657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.986850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.986884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.987011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.987044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.987174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.987205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 
00:36:00.156 [2024-12-15 13:16:07.987345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.987380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.987584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.987617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.987820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.987866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.988058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.988092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.988289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.988322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 
00:36:00.156 [2024-12-15 13:16:07.988434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.988467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.988702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.988735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.988908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.988944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.989157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.989190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.989311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.989344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 
00:36:00.156 [2024-12-15 13:16:07.989554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.989587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.989756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.989789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.990010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.990044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.990185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.990218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.990418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.990451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 
00:36:00.156 [2024-12-15 13:16:07.990641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.990674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.990833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.990868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.990988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.991022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.991236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.991269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.991521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.991555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 
00:36:00.156 [2024-12-15 13:16:07.991691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.991724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.991966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.992000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.992109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.992140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.992323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.992356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.992472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.992504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 
00:36:00.156 [2024-12-15 13:16:07.992704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.992738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.992846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.992879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.993130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.993163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.993347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.993380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.156 qpair failed and we were unable to recover it. 00:36:00.156 [2024-12-15 13:16:07.993556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.156 [2024-12-15 13:16:07.993589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.157 qpair failed and we were unable to recover it. 
00:36:00.157 [2024-12-15 13:16:07.993717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.157 [2024-12-15 13:16:07.993750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.157 qpair failed and we were unable to recover it. 00:36:00.157 [2024-12-15 13:16:07.994004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.157 [2024-12-15 13:16:07.994039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.157 qpair failed and we were unable to recover it. 00:36:00.157 [2024-12-15 13:16:07.994235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.157 [2024-12-15 13:16:07.994268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.157 qpair failed and we were unable to recover it. 00:36:00.157 [2024-12-15 13:16:07.994443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.157 [2024-12-15 13:16:07.994477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.157 qpair failed and we were unable to recover it. 00:36:00.157 [2024-12-15 13:16:07.994606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.157 [2024-12-15 13:16:07.994639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.157 qpair failed and we were unable to recover it. 
00:36:00.157 [2024-12-15 13:16:07.994775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.157 [2024-12-15 13:16:07.994809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.157 qpair failed and we were unable to recover it. 
[The identical three-message sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats for every retry from 13:16:07.994775 through 13:16:08.019790; only the timestamps differ, so the duplicates are elided here.]
00:36:00.439 [2024-12-15 13:16:08.019923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.019957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.020179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.020213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.020465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.020498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.020685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.020718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.020902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.020938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 
00:36:00.439 [2024-12-15 13:16:08.021068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.021101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.021293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.021326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.021440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.021473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.021715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.021747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.021915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.021950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 
00:36:00.439 [2024-12-15 13:16:08.022135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.022168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.022271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.022305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.022500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.022533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.022768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.022801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.022995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.023035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 
00:36:00.439 [2024-12-15 13:16:08.023225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.023258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.023442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.023477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.023681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.439 [2024-12-15 13:16:08.023715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-15 13:16:08.023953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.023989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.024234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.024267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 
00:36:00.440 [2024-12-15 13:16:08.024386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.024419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.024627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.024661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.024863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.024898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.025070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.025103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.025348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.025381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 
00:36:00.440 [2024-12-15 13:16:08.025635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.025668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.025943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.025977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.026219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.026251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.026374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.026408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.026618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.026651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 
00:36:00.440 [2024-12-15 13:16:08.026789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.026822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.027018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.027051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.027237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.027270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.027400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.027433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.027629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.027661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 
00:36:00.440 [2024-12-15 13:16:08.027854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.027889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.028137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.028170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.028359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.028391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.028566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.028599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.028729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.028763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 
00:36:00.440 [2024-12-15 13:16:08.029017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.029053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.029188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.029222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.029344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.029377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.029495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.029529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.029648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.029682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 
00:36:00.440 [2024-12-15 13:16:08.029814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.029860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.029992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.030025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.030230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.030264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.030379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.030410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.030601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.030635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 
00:36:00.440 [2024-12-15 13:16:08.030875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.030911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.031099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.031132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.031320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.031354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.031602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.031636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.031835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.031874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 
00:36:00.440 [2024-12-15 13:16:08.031997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.032031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.032235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.440 [2024-12-15 13:16:08.032269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.440 qpair failed and we were unable to recover it. 00:36:00.440 [2024-12-15 13:16:08.032551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.032583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.032772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.032806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.033072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.033106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 
00:36:00.441 [2024-12-15 13:16:08.033243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.033276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.033492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.033524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.033761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.033795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.033979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.034014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.034223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.034257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 
00:36:00.441 [2024-12-15 13:16:08.034374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.034405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.034667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.034702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.034943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.034978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.035159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.035194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.035383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.035416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 
00:36:00.441 [2024-12-15 13:16:08.035654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.035688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.035809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.035851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.036032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.036064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.036255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.036288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.036476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.036509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 
00:36:00.441 [2024-12-15 13:16:08.036699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.036733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.037018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.037053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.037171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.037204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.037336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.037370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.037638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.037671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 
00:36:00.441 [2024-12-15 13:16:08.037792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.037835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.037971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.038004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.038202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.038235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.038470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.038503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.038673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.038707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 
00:36:00.441 [2024-12-15 13:16:08.038876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.038911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.039094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.039127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.039242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.039276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.039452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.039486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.039669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.039703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 
00:36:00.441 [2024-12-15 13:16:08.039886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.039920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.040106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.040139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.040339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.040372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.040635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.040667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.040856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.040896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 
00:36:00.441 [2024-12-15 13:16:08.041103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.041137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.441 [2024-12-15 13:16:08.041324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.441 [2024-12-15 13:16:08.041357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.441 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.041552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.041585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.041837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.041872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.042111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.042144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 
00:36:00.442 [2024-12-15 13:16:08.042413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.042446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.042563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.042595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.042857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.042892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.043010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.043043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.043304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.043336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 
00:36:00.442 [2024-12-15 13:16:08.043532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.043565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.043795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.043836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.044111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.044143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.044325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.044358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.044598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.044631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 
00:36:00.442 [2024-12-15 13:16:08.044871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.044905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.045039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.045073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.045265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.045298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.045427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.045461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.045727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.045761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 
00:36:00.442 [2024-12-15 13:16:08.045947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.045981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.046194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.046228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.046396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.046430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.046652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.046685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.046925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.046960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 
00:36:00.442 [2024-12-15 13:16:08.047071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.047104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.047349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.047383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.047599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.047633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.047838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.047873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.048168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.048201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 
00:36:00.442 [2024-12-15 13:16:08.048444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.048476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.048667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.048699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.048876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.048910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.049025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.049058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 00:36:00.442 [2024-12-15 13:16:08.049183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.049216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.442 qpair failed and we were unable to recover it. 
00:36:00.442 [2024-12-15 13:16:08.049387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.442 [2024-12-15 13:16:08.049420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.049602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.049635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.049837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.049872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.050000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.050032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.050273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.050311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 
00:36:00.443 [2024-12-15 13:16:08.050493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.050527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.050846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.050881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.051120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.051153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.051357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.051389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.051572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.051605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 
00:36:00.443 [2024-12-15 13:16:08.051870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.051906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.052152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.052184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.052396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.052429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.052612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.052645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.052865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.052901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 
00:36:00.443 [2024-12-15 13:16:08.053111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.053145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.053392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.053425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.053732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.053765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.053999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.054035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.054274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.054307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 
00:36:00.443 [2024-12-15 13:16:08.054497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.054530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.054654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.054687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.054888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.054922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.055065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.055098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.055360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.055393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 
00:36:00.443 [2024-12-15 13:16:08.055516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.055550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.055756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.055789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.056039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.056074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.056249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.056283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.056407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.056440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 
00:36:00.443 [2024-12-15 13:16:08.056655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.056688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.056939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.056976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.057162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.057196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.057311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.057344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.057601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.057634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 
00:36:00.443 [2024-12-15 13:16:08.057833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.057867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.058049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.058083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.058197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.058230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.058404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.058438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 00:36:00.443 [2024-12-15 13:16:08.058608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.443 [2024-12-15 13:16:08.058642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.443 qpair failed and we were unable to recover it. 
00:36:00.443 [2024-12-15 13:16:08.058778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.058812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 00:36:00.444 [2024-12-15 13:16:08.059081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.059115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 00:36:00.444 [2024-12-15 13:16:08.059356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.059389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 00:36:00.444 [2024-12-15 13:16:08.059630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.059663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 00:36:00.444 [2024-12-15 13:16:08.059842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.059882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 
00:36:00.444 [2024-12-15 13:16:08.060099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.060132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 00:36:00.444 [2024-12-15 13:16:08.060247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.060280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 00:36:00.444 [2024-12-15 13:16:08.060544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.060577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 00:36:00.444 [2024-12-15 13:16:08.060765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.060798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 00:36:00.444 [2024-12-15 13:16:08.060933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.444 [2024-12-15 13:16:08.060968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.444 qpair failed and we were unable to recover it. 
00:36:00.444 [2024-12-15 13:16:08.061089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.061122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.061375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.061407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.061671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.061705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.061928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.061963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.062085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.062117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.062308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.062341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.062453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.062487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.062682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.062716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.062905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.062940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.063202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.063236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.063360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.063394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.063505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.063538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.063744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.063778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.064055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.064089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.064267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.064300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.064543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.064576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.064699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.064733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.064995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.065030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.065242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.065276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.065523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.065557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.065740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.065772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.065937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.065972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.066092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.066126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.066252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.066285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.066477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.066511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.066685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.066718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.066909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.066944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.067146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.067180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.067420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.444 [2024-12-15 13:16:08.067453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.444 qpair failed and we were unable to recover it.
00:36:00.444 [2024-12-15 13:16:08.067579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.067612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.067784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.067817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.068000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.068034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.068299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.068331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.068515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.068548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.068810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.068859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.069111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.069144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.069321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.069353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.069487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.069521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.069695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.069728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.069914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.069948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.070215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.070248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.070516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.070549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.070665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.070699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.070955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.070990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.071167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.071201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.071402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.071435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.071568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.071602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.071844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.071879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.072097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.072132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.072315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.072348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.072631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.072665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.072866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.072901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.073093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.073127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.073311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.073344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.073542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.073576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.073767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.073800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.074045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.074078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.074208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.074242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.074412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.074445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.074572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.074605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.074804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.074848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.075117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.075151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.075333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.075367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.075541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.075574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.075843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.075878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.076090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.076123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.076365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.076399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.076583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.076616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.445 [2024-12-15 13:16:08.076885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.445 [2024-12-15 13:16:08.076921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.445 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.077161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.077194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.077318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.077351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.077567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.077601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.077706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.077740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.077912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.077946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.078132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.078172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.078309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.078344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.078616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.078650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.078841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.078876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.079143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.079176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.079295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.079328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.079541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.079575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.079782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.079815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.080027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.080060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.080187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.080222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.080431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.080465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.080704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.080740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.080866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.080900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.081018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.081051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.081245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.081279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.081464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.081498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.081687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.081720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.081910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.081945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.082079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.082112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.082299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.082332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.082517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.082550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.082677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.082710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.082820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.082863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.083067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.083101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.083235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.083269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.083465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.083498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.083620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.083654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.083818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.083910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.084117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.084156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.446 qpair failed and we were unable to recover it.
00:36:00.446 [2024-12-15 13:16:08.084446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.446 [2024-12-15 13:16:08.084481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.084698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.084731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.084864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.084900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.085165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.085200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.085386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.085419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.085621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.085655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.085927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.085963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.086083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.086116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.086304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.086337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.086456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.086489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.086663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.086696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.086937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.086971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.087168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.087202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.087402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.087434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.087605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.087636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.087836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.087869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.088039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.088072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.088207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.088240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.088357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.088390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.088602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.088639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.088778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.088811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.088945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.088980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.089156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.089191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.089307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.089342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.089612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.089645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.089784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.089820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.090049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.090083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.090271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.090304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.090480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.090514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.090632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.090664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.090778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.090813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.091015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.091049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.091231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.091264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.091396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.091428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.091549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.091582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.091706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.091739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.091944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.091978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.092168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.092200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.092418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.092453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.092661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.447 [2024-12-15 13:16:08.092695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.447 qpair failed and we were unable to recover it.
00:36:00.447 [2024-12-15 13:16:08.092906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.092941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.093062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.093096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.093221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.093253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.093491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.093524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.093724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.093757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.093936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.093970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.094165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.094199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.094478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.094511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.094620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.094653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.094780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.094812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.095011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.095043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.095233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.095267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.095391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.095431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.095612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.095646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.095911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.095947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.096133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.096166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.096359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.096392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.096582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.096613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.096737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.096769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.096926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.096962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.097202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.097235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.097498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.097531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.097645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.097677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.097793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.097834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.097975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.098010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.098201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.098235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.098355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.098388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.098564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.098598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.098712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.098745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.098928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.098963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.099086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.099118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.099235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.099269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.099471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.099505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.099615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.099647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.099863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.099898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.100030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.100063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.100237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.100270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.100530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.100562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.100742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.100775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.100914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.448 [2024-12-15 13:16:08.100954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.448 qpair failed and we were unable to recover it.
00:36:00.448 [2024-12-15 13:16:08.101061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.449 [2024-12-15 13:16:08.101094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.449 qpair failed and we were unable to recover it.
00:36:00.449 [2024-12-15 13:16:08.101226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.449 [2024-12-15 13:16:08.101259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.449 qpair failed and we were unable to recover it.
00:36:00.449 [2024-12-15 13:16:08.101446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.449 [2024-12-15 13:16:08.101480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.449 qpair failed and we were unable to recover it.
00:36:00.449 [2024-12-15 13:16:08.101722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.449 [2024-12-15 13:16:08.101757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.449 qpair failed and we were unable to recover it.
00:36:00.449 [2024-12-15 13:16:08.101940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.449 [2024-12-15 13:16:08.101975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.449 qpair failed and we were unable to recover it.
00:36:00.449 [2024-12-15 13:16:08.102118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.449 [2024-12-15 13:16:08.102151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.449 qpair failed and we were unable to recover it.
00:36:00.449 [2024-12-15 13:16:08.102388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.449 [2024-12-15 13:16:08.102426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.449 qpair failed and we were unable to recover it.
00:36:00.449 [2024-12-15 13:16:08.102609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.102644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.102752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.102785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.103040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.103074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.103206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.103237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.103432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.103465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 
00:36:00.449 [2024-12-15 13:16:08.103751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.103783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.103932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.103967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.104091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.104124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.104320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.104351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.104464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.104494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 
00:36:00.449 [2024-12-15 13:16:08.104697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.104729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.104847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.104881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.105005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.105037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.105165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.105199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.105328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.105360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 
00:36:00.449 [2024-12-15 13:16:08.105565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.105598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.105820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.105883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.105994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.106027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.106222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.106255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.106449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.106481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 
00:36:00.449 [2024-12-15 13:16:08.106614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.106647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.106855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.106890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.107002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.107035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.107272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.107305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.107489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.107523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 
00:36:00.449 [2024-12-15 13:16:08.107817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.107859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.108098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.108131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.108254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.108286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.108408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.108441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.108563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.108593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 
00:36:00.449 [2024-12-15 13:16:08.108717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.108749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.108861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.108896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.449 [2024-12-15 13:16:08.109008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.449 [2024-12-15 13:16:08.109044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.449 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.109206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.109281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.109549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.109587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 
00:36:00.450 [2024-12-15 13:16:08.109846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.109883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.110078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.110111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.110394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.110427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.110644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.110678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.110870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.110906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 
00:36:00.450 [2024-12-15 13:16:08.111144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.111177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.111297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.111333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.111511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.111544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.111791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.111848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.111985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.112020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 
00:36:00.450 [2024-12-15 13:16:08.112203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.112235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.112483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.112526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.112702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.112736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.112922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.112957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.113112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.113145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 
00:36:00.450 [2024-12-15 13:16:08.113333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.113366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.113504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.113538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.113650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.113683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.113877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.113912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.114089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.114122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 
00:36:00.450 [2024-12-15 13:16:08.114250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.114283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.114488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.114522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.114717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.114751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.114949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.114985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.115196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.115231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 
00:36:00.450 [2024-12-15 13:16:08.115446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.115479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.115604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.115639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.115885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.115919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.116133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.116165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.116347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.116380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 
00:36:00.450 [2024-12-15 13:16:08.116505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.116539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.116727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.116760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.450 [2024-12-15 13:16:08.116952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.450 [2024-12-15 13:16:08.116987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.450 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.117160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.117192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.117373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.117408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 
00:36:00.451 [2024-12-15 13:16:08.117517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.117552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.117732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.117765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.118051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.118086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.118322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.118396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.118528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.118564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 
00:36:00.451 [2024-12-15 13:16:08.118685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.118719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.118850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.118886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.119065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.119098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.119225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.119257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.119375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.119407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 
00:36:00.451 [2024-12-15 13:16:08.119586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.119619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.119820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.119868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.120108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.120140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.120266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.120297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 00:36:00.451 [2024-12-15 13:16:08.120416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.120449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 
00:36:00.451 [2024-12-15 13:16:08.120649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.451 [2024-12-15 13:16:08.120682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.451 qpair failed and we were unable to recover it. 
[... same connect()/qpair-failure message pair repeated 67 more times for tqpair=0x7fbae4000b90 (13:16:08.120860 - 13:16:08.135464) ...]
00:36:00.453 [2024-12-15 13:16:08.135706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.453 [2024-12-15 13:16:08.135779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.453 qpair failed and we were unable to recover it. 
00:36:00.453 [2024-12-15 13:16:08.135957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.453 [2024-12-15 13:16:08.136030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.453 qpair failed and we were unable to recover it. 
[... same message pair repeated 45 more times for tqpair=0x7fbae8000b90 (13:16:08.136245 - 13:16:08.146561) ...]
00:36:00.454 [2024-12-15 13:16:08.146739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.146772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.146907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.146941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.147073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.147107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.147232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.147267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.147445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.147478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 
00:36:00.454 [2024-12-15 13:16:08.147654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.147686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.147810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.147854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.148112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.148148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.148389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.148422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.148542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.148576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 
00:36:00.454 [2024-12-15 13:16:08.148760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.148793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.149068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.149138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.149281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.149320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.149498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.149533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.149656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.149690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 
00:36:00.454 [2024-12-15 13:16:08.149800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.149845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.150115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.150148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.150282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.150316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.150517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.150551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.150738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.150771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 
00:36:00.454 [2024-12-15 13:16:08.150994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.151034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.151168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.151200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.151314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.151347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.151556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.151589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 00:36:00.454 [2024-12-15 13:16:08.151800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.454 [2024-12-15 13:16:08.151841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.454 qpair failed and we were unable to recover it. 
00:36:00.454 [2024-12-15 13:16:08.151958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.151991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.152202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.152234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.152356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.152388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.152604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.152636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.152845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.152878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 
00:36:00.455 [2024-12-15 13:16:08.153065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.153099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.153270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.153302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.153425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.153457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.153630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.153668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.153791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.153823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 
00:36:00.455 [2024-12-15 13:16:08.153963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.153997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.154107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.154139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.154267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.154301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.154484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.154517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.154625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.154657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 
00:36:00.455 [2024-12-15 13:16:08.154841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.154876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.154995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.155029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.155238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.155272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.155409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.155442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.155651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.155684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 
00:36:00.455 [2024-12-15 13:16:08.155876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.155913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.156020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.156056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.156191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.156224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.156402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.156436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.156660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.156694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 
00:36:00.455 [2024-12-15 13:16:08.156870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.156903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.157018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.157050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.157244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.157277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.157403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.157435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.157568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.157601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 
00:36:00.455 [2024-12-15 13:16:08.157723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.157756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.157997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.158032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.158208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.158242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.158415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.158448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 00:36:00.455 [2024-12-15 13:16:08.158634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.455 [2024-12-15 13:16:08.158667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.455 qpair failed and we were unable to recover it. 
00:36:00.456 [2024-12-15 13:16:08.158859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.158896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.159023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.159055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.159178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.159212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.159384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.159416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.159535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.159567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 
00:36:00.456 [2024-12-15 13:16:08.159691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.159724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.159846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.159880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.160140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.160175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.160351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.160383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.160625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.160658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 
00:36:00.456 [2024-12-15 13:16:08.160787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.160820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.160950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.160983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.161168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.161201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.161418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.161470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.161662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.161694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 
00:36:00.456 [2024-12-15 13:16:08.161869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.161905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.162033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.162066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.162183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.162216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.162336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.162368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.162618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.162652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 
00:36:00.456 [2024-12-15 13:16:08.162922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.162957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.163221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.163253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.163377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.163411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.163535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.163566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.163748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.163782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 
00:36:00.456 [2024-12-15 13:16:08.163975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.164009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.164124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.164158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.164273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.164309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.164492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.164526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.164714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.164746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 
00:36:00.456 [2024-12-15 13:16:08.164858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.164893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.165090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.165124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.165304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.165337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.165509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.165541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.165662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.165693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 
00:36:00.456 [2024-12-15 13:16:08.165867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.165915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.166102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.166134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.166312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.166347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.456 [2024-12-15 13:16:08.166519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.456 [2024-12-15 13:16:08.166553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.456 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.166677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.166708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 
00:36:00.457 [2024-12-15 13:16:08.166941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.167015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.167227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.167265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.167533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.167568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.167851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.167889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.168066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.168100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 
00:36:00.457 [2024-12-15 13:16:08.168275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.168309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.168529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.168564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.168741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.168774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.168915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.168950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.169156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.169189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 
00:36:00.457 [2024-12-15 13:16:08.169399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.169433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.169615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.169650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.169768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.169803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.169999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.170042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.170154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.170189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 
00:36:00.457 [2024-12-15 13:16:08.170450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.170483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.170607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.170641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.170775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.170809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.170943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.170979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.171160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.171193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 
00:36:00.457 [2024-12-15 13:16:08.171304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.171338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.171513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.171547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.171806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.171850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.172044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.172077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.172253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.172288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 
00:36:00.457 [2024-12-15 13:16:08.172473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.172506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.172759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.172793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.173000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.173036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.173229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.173263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.173459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.173493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 
00:36:00.457 [2024-12-15 13:16:08.173736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.173770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.173897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.173932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.174050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.174085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.174213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.174246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.174390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.174426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 
00:36:00.457 [2024-12-15 13:16:08.174601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.174635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.174813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.174860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.174984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.457 [2024-12-15 13:16:08.175018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.457 qpair failed and we were unable to recover it. 00:36:00.457 [2024-12-15 13:16:08.175188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.175222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.175343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.175376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 
00:36:00.458 [2024-12-15 13:16:08.175594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.175629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.175733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.175764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.175942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.175977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.176093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.176127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.176316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.176350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 
00:36:00.458 [2024-12-15 13:16:08.176457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.176490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.176670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.176703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.176950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.176986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.177108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.177140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.177279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.177312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 
00:36:00.458 [2024-12-15 13:16:08.177437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.177471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.177685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.177718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.177938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.177974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.178181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.178221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.178341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.178376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 
00:36:00.458 [2024-12-15 13:16:08.178556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.178589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.178714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.178748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.178934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.178969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.179173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.179207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.179425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.179459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 
00:36:00.458 [2024-12-15 13:16:08.179642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.179676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.179866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.179901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.180094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.180127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.180333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.180367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.180553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.180586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 
00:36:00.458 [2024-12-15 13:16:08.180708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.180742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.180853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.180889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.181087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.181122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.181226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.181260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.181527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.181561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 
00:36:00.458 [2024-12-15 13:16:08.181839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.181875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.182068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.182102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.182289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.182324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.182461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.182495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.182612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.182645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 
00:36:00.458 [2024-12-15 13:16:08.182798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.182878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.183044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.183083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.458 qpair failed and we were unable to recover it. 00:36:00.458 [2024-12-15 13:16:08.183207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.458 [2024-12-15 13:16:08.183241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.183361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.183395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.183570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.183606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 
00:36:00.459 [2024-12-15 13:16:08.183855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.183901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.184048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.184082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.184259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.184293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.184408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.184441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.184568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.184601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 
00:36:00.459 [2024-12-15 13:16:08.184874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.184916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.185180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.185213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.185387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.185420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.185527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.185560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.185746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.185780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 
00:36:00.459 [2024-12-15 13:16:08.185970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.186004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.186131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.186165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.186359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.186392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.186567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.186600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.186804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.186850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 
00:36:00.459 [2024-12-15 13:16:08.187027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.187060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.187302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.187336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.187536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.187572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.187685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.187716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.187893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.187928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 
00:36:00.459 [2024-12-15 13:16:08.188134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.188167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.188352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.188384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.188624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.188657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.188839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.188874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.189057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.189093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 
00:36:00.459 [2024-12-15 13:16:08.189369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.189402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.189593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.189627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.189804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.189853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.190051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.190084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.190345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.190377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 
00:36:00.459 [2024-12-15 13:16:08.190567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.190600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.190780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.190812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.459 [2024-12-15 13:16:08.190995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.459 [2024-12-15 13:16:08.191028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.459 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.191214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.191249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.191508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.191540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 
00:36:00.460 [2024-12-15 13:16:08.191650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.191682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.191877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.191911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.192094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.192126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.192309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.192343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.192606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.192639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 
00:36:00.460 [2024-12-15 13:16:08.192822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.192868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.193117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.193150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.193329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.193363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.193674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.193707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.193846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.193880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 
00:36:00.460 [2024-12-15 13:16:08.194144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.194177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.194462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.194496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.194764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.194799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.195004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.195039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.195210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.195244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 
00:36:00.460 [2024-12-15 13:16:08.195419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.195451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.195653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.195687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.195853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.195887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.196149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.196182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.196373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.196411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 
00:36:00.460 [2024-12-15 13:16:08.196606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.196639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.196838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.196873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.197072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.197105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.197327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.197362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.197488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.197522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 
00:36:00.460 [2024-12-15 13:16:08.197706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.197739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.197933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.197968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.198156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.198190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.198354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.198388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.198542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.198611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 
00:36:00.460 [2024-12-15 13:16:08.198849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.198920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.199125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.199162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.199386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.199420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.199608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.199642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.199837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.199874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 
00:36:00.460 [2024-12-15 13:16:08.200082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.200115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.200379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.200412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.460 qpair failed and we were unable to recover it. 00:36:00.460 [2024-12-15 13:16:08.200650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.460 [2024-12-15 13:16:08.200683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.200868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.200903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.201140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.201173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 
00:36:00.461 [2024-12-15 13:16:08.201278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.201310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.201420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.201453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.201637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.201669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.201853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.201887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.202060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.202092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 
00:36:00.461 [2024-12-15 13:16:08.202202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.202234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.202424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.202464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.202672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.202704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.202816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.202857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 00:36:00.461 [2024-12-15 13:16:08.203059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.203092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it. 
00:36:00.461 [2024-12-15 13:16:08.203300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.461 [2024-12-15 13:16:08.203332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.461 qpair failed and we were unable to recover it.
[identical errno = 111 connect() failures against tqpair=0x7fbaf0000b90, addr=10.0.0.2, port=4420 repeated for every retry through 13:16:08.224; verbatim repeats elided]
00:36:00.463 [2024-12-15 13:16:08.225269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.463 [2024-12-15 13:16:08.225341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.463 qpair failed and we were unable to recover it.
[identical errno = 111 connect() failures against tqpair=0x7fbae4000b90, addr=10.0.0.2, port=4420 repeated through 13:16:08.227; verbatim repeats elided]
00:36:00.464 [2024-12-15 13:16:08.227903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.227938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.228058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.228100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.228233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.228277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.228458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.228492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.228679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.228717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 
00:36:00.464 [2024-12-15 13:16:08.228843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.228877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.228981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.229013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.229139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.229173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.229472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.229525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.229711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.229750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 
00:36:00.464 [2024-12-15 13:16:08.229942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.229977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.230169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.230203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.230309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.230341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.230517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.230550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.230724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.230756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 
00:36:00.464 [2024-12-15 13:16:08.230952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.230986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.231114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.231147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.231413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.231445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.231620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.231653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.231863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.231897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 
00:36:00.464 [2024-12-15 13:16:08.232006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.232038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.232144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.232177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.232288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.232321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.232520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.232554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 00:36:00.464 [2024-12-15 13:16:08.232748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.464 [2024-12-15 13:16:08.232780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.464 qpair failed and we were unable to recover it. 
00:36:00.465 [2024-12-15 13:16:08.232921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.232955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.233059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.233092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.233270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.233303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.233603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.233676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.233881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.233921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 
00:36:00.465 [2024-12-15 13:16:08.234103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.234135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.234380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.234413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.234654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.234689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.234863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.234897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.235070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.235103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 
00:36:00.465 [2024-12-15 13:16:08.235233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.235267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.235445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.235479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.235693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.235725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.235929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.235965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.236178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.236211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 
00:36:00.465 [2024-12-15 13:16:08.236336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.236372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.236553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.236591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.236793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.236835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.236953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.236987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.237277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.237309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 
00:36:00.465 [2024-12-15 13:16:08.237492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.237525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.237700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.237732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.237987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.238022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.238202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.238234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.238409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.238442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 
00:36:00.465 [2024-12-15 13:16:08.238561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.238593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.238845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.238880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.239130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.239162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.239353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.239386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.239529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.239561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 
00:36:00.465 [2024-12-15 13:16:08.239681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.239714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.239864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.239898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.240009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.240043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.240235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.240268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.240376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.240409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 
00:36:00.465 [2024-12-15 13:16:08.240534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.240567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.240760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.240793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.240978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.241050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.241200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.465 [2024-12-15 13:16:08.241243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.465 qpair failed and we were unable to recover it. 00:36:00.465 [2024-12-15 13:16:08.241510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.241543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 
00:36:00.466 [2024-12-15 13:16:08.241801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.241844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.242089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.242124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.242311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.242346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.242616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.242653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.242788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.242820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 
00:36:00.466 [2024-12-15 13:16:08.243038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.243071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.243195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.243228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.243343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.243376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.243585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.243617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.243792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.243835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 
00:36:00.466 [2024-12-15 13:16:08.244031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.244064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.244267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.244300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.244428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.244462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.244633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.244666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 00:36:00.466 [2024-12-15 13:16:08.244876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.466 [2024-12-15 13:16:08.244910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.466 qpair failed and we were unable to recover it. 
00:36:00.469 [2024-12-15 13:16:08.269823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.269863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.270021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.270053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.270167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.270202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.270442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.270475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.270684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.270716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 
00:36:00.469 [2024-12-15 13:16:08.270911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.270946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.271088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.271120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.271314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.271348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.271535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.271567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.271681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.271714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 
00:36:00.469 [2024-12-15 13:16:08.271953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.271988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.272108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.272140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.272430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.272464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.272580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.272613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.272862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.272896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 
00:36:00.469 [2024-12-15 13:16:08.273080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.273113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.273358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.273391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.273572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.273604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.273790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.273823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.274012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.274045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 
00:36:00.469 [2024-12-15 13:16:08.274292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.274325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.274538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.274570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.274753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.274786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.274978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.275012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.275282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.275316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 
00:36:00.469 [2024-12-15 13:16:08.275433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.275465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.469 qpair failed and we were unable to recover it. 00:36:00.469 [2024-12-15 13:16:08.275704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.469 [2024-12-15 13:16:08.275736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.275917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.275951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.276193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.276226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.276404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.276437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 
00:36:00.470 [2024-12-15 13:16:08.276614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.276646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.276774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.276807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.276998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.277031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.277292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.277325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.277450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.277482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 
00:36:00.470 [2024-12-15 13:16:08.277619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.277652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.277842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.277877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.278120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.278151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.278268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.278305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.278423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.278456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 
00:36:00.470 [2024-12-15 13:16:08.278712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.278745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.278933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.278969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.279139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.279171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.279359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.279390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.279578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.279610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 
00:36:00.470 [2024-12-15 13:16:08.279806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.279848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.280089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.280121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.280307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.280339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.280601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.280633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.280809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.280848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 
00:36:00.470 [2024-12-15 13:16:08.280961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.280993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.281247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.281279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.281406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.281438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.281542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.281572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.281698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.281730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 
00:36:00.470 [2024-12-15 13:16:08.281844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.281877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.282003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.282035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.282228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.282261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.282464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.282497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.282758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.282791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 
00:36:00.470 [2024-12-15 13:16:08.283050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.283084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.283277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.283310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.283436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.283468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.283650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.283683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.470 [2024-12-15 13:16:08.283799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.283854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 
00:36:00.470 [2024-12-15 13:16:08.284132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.470 [2024-12-15 13:16:08.284164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.470 qpair failed and we were unable to recover it. 00:36:00.471 [2024-12-15 13:16:08.284377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.471 [2024-12-15 13:16:08.284410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.471 qpair failed and we were unable to recover it. 00:36:00.471 [2024-12-15 13:16:08.284528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.471 [2024-12-15 13:16:08.284562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.471 qpair failed and we were unable to recover it. 00:36:00.471 [2024-12-15 13:16:08.284823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.471 [2024-12-15 13:16:08.284871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.471 qpair failed and we were unable to recover it. 00:36:00.471 [2024-12-15 13:16:08.285081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.471 [2024-12-15 13:16:08.285113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.471 qpair failed and we were unable to recover it. 
00:36:00.471 [2024-12-15 13:16:08.285374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.471 [2024-12-15 13:16:08.285407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.471 qpair failed and we were unable to recover it. 00:36:00.471 [2024-12-15 13:16:08.285673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.471 [2024-12-15 13:16:08.285706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.471 qpair failed and we were unable to recover it. 00:36:00.471 [2024-12-15 13:16:08.285901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.471 [2024-12-15 13:16:08.285936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.471 qpair failed and we were unable to recover it. 00:36:00.471 [2024-12-15 13:16:08.286177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.471 [2024-12-15 13:16:08.286210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.471 qpair failed and we were unable to recover it. 00:36:00.471 [2024-12-15 13:16:08.286457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.471 [2024-12-15 13:16:08.286489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.471 qpair failed and we were unable to recover it. 
00:36:00.471 [2024-12-15 13:16:08.286695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.471 [2024-12-15 13:16:08.286728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:00.471 qpair failed and we were unable to recover it.
00:36:00.474 [2024-12-15 13:16:08.312547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.312580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.312720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.312753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.312950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.312985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.313223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.313255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.313426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.313458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 
00:36:00.474 [2024-12-15 13:16:08.313581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.313614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.313735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.313768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.313887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.313920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.314103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.314140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.314355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.314388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 
00:36:00.474 [2024-12-15 13:16:08.314565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.314597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.314782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.314814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.315117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.315151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.315412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.315444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.315578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.315611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 
00:36:00.474 [2024-12-15 13:16:08.315800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.315839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.316014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.316046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.316218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.316250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.316487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.316520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.316637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.316668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 
00:36:00.474 [2024-12-15 13:16:08.316874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.316909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.317151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.317184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.317446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.317480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.317733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.317766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.317899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.317932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 
00:36:00.474 [2024-12-15 13:16:08.318123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.318157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.318449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.318482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.318620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.474 [2024-12-15 13:16:08.318653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.474 qpair failed and we were unable to recover it. 00:36:00.474 [2024-12-15 13:16:08.318844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.318876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.319016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.319049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 
00:36:00.475 [2024-12-15 13:16:08.319216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.319248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.319382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.319415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.319651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.319683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.319869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.319903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.320094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.320128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 
00:36:00.475 [2024-12-15 13:16:08.320311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.320344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.320462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.320494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.320628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.320661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.320920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.320955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.321085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.321118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 
00:36:00.475 [2024-12-15 13:16:08.321250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.321283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.321549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.321581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.321698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.321732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.321851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.321884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.322171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.322203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 
00:36:00.475 [2024-12-15 13:16:08.322391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.322423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.322554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.322587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.322714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.322747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.322948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.322987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.475 [2024-12-15 13:16:08.323173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.323206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 
00:36:00.475 [2024-12-15 13:16:08.323342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.475 [2024-12-15 13:16:08.323374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.475 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.323552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.323585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.323783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.323817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.324019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.324052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.324160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.324193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 
00:36:00.754 [2024-12-15 13:16:08.324377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.324410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.324715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.324748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.324957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.324993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.325182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.325216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.325399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.325432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 
00:36:00.754 [2024-12-15 13:16:08.325620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.325653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.325782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.325815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.326011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.326045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.326159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.326192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.326382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.326416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 
00:36:00.754 [2024-12-15 13:16:08.326684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.326716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.326850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.326883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.327075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.327108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.327286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.327318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.327443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.327476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 
00:36:00.754 [2024-12-15 13:16:08.327661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.327693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.327972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.328006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.328188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.328220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.328487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.754 [2024-12-15 13:16:08.328519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.754 qpair failed and we were unable to recover it. 00:36:00.754 [2024-12-15 13:16:08.328711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.755 [2024-12-15 13:16:08.328744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.755 qpair failed and we were unable to recover it. 
00:36:00.755 [2024-12-15 13:16:08.328930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.755 [2024-12-15 13:16:08.328965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.755 qpair failed and we were unable to recover it. 00:36:00.755 [2024-12-15 13:16:08.329209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.755 [2024-12-15 13:16:08.329242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.755 qpair failed and we were unable to recover it. 00:36:00.755 [2024-12-15 13:16:08.329341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.755 [2024-12-15 13:16:08.329371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.755 qpair failed and we were unable to recover it. 00:36:00.755 [2024-12-15 13:16:08.329557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.755 [2024-12-15 13:16:08.329589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.755 qpair failed and we were unable to recover it. 00:36:00.755 [2024-12-15 13:16:08.329718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.755 [2024-12-15 13:16:08.329753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.755 qpair failed and we were unable to recover it. 
00:36:00.756 [2024-12-15 13:16:08.338052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.756 [2024-12-15 13:16:08.338119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.756 qpair failed and we were unable to recover it.
00:36:00.758 [2024-12-15 13:16:08.356700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.356735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.356911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.356947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.357214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.357249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.357378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.357412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.357599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.357632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 
00:36:00.758 [2024-12-15 13:16:08.357823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.357874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.358121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.358157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.358404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.358437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.358569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.358603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.358867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.358903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 
00:36:00.758 [2024-12-15 13:16:08.359089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.359122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.359325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.359365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.359605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.359639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.359897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.359933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.360216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.360250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 
00:36:00.758 [2024-12-15 13:16:08.360423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.360457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.360639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.360674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.360936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.360971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.361222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.361269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.361506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.361539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 
00:36:00.758 [2024-12-15 13:16:08.361733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.361766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.362020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.758 [2024-12-15 13:16:08.362055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.758 qpair failed and we were unable to recover it. 00:36:00.758 [2024-12-15 13:16:08.362232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.362267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.362449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.362482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.362664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.362699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 
00:36:00.759 [2024-12-15 13:16:08.362899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.362936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.363065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.363099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.363296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.363330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.363505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.363538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.363711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.363745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 
00:36:00.759 [2024-12-15 13:16:08.363912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.363950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.364131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.364165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.364380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.364413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.364531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.364565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.364844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.364878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 
00:36:00.759 [2024-12-15 13:16:08.365069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.365101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.365278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.365312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.365487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.365521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.365696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.365730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.365918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.365954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 
00:36:00.759 [2024-12-15 13:16:08.366201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.366235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.366417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.366451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.366580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.366613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.366872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.366908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 00:36:00.759 [2024-12-15 13:16:08.367100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.367135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it. 
00:36:00.759 [2024-12-15 13:16:08.367465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.367539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it.
00:36:00.759 [2024-12-15 13:16:08.367809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.759 [2024-12-15 13:16:08.367894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.759 qpair failed and we were unable to recover it.
[log condensed: the same failure sequence repeats 48 more times for tqpair=0x7fbae8000b90, timestamps 2024-12-15 13:16:08.368144 through 13:16:08.378907]
00:36:00.761 [2024-12-15 13:16:08.379125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.379158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.379284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.379316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.379603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.379638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.379855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.379890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.380190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.380226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 
00:36:00.761 [2024-12-15 13:16:08.380356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.380389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.380495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.380528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.380738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.380771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.381020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.381054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.381180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.381213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 
00:36:00.761 [2024-12-15 13:16:08.381327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.381358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.381475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.381508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.381681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.381715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.381905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.381940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.382193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.382225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 
00:36:00.761 [2024-12-15 13:16:08.382493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.382526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.382736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.382769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.383038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.383073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.383250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.383282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.383491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.383525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 
00:36:00.761 [2024-12-15 13:16:08.383720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.383753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.383876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.383911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.384168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.384201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.384324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.384357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.384472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.384504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 
00:36:00.761 [2024-12-15 13:16:08.384688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.384721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.384982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.385015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.385146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.385178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.385363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.385395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.385498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.385532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 
00:36:00.761 [2024-12-15 13:16:08.385720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.385759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.385949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.385984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.386172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.386206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.386466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.386499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.386685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.386718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 
00:36:00.761 [2024-12-15 13:16:08.386930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.386966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.761 [2024-12-15 13:16:08.387163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.761 [2024-12-15 13:16:08.387196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.761 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.387336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.387370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.387547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.387580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.387698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.387732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 
00:36:00.762 [2024-12-15 13:16:08.387863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.387898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.388108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.388141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.388382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.388415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.388532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.388566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.388784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.388817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 
00:36:00.762 [2024-12-15 13:16:08.388945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.388980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.389102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.389135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.389350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.389383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.389574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.389607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.389717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.389750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 
00:36:00.762 [2024-12-15 13:16:08.390025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.390059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.390239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.390272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.390485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.390517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.390629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.390671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.390844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.390878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 
00:36:00.762 [2024-12-15 13:16:08.391069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.391102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.391287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.391321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.391583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.391617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.391798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.391853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.392096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.392130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 
00:36:00.762 [2024-12-15 13:16:08.392318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.392351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.392548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.392582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.392754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.392788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.392983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.393017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.393198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.393232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 
00:36:00.762 [2024-12-15 13:16:08.393403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.393436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.393618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.393650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.393888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.393923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.394125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.394159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.394343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.394376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 
00:36:00.762 [2024-12-15 13:16:08.394551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.394590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.394861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.394895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.395082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.395116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.762 [2024-12-15 13:16:08.395243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.762 [2024-12-15 13:16:08.395275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.762 qpair failed and we were unable to recover it. 00:36:00.763 [2024-12-15 13:16:08.395457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.763 [2024-12-15 13:16:08.395491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.763 qpair failed and we were unable to recover it. 
00:36:00.763 [2024-12-15 13:16:08.395688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.763 [2024-12-15 13:16:08.395721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.763 qpair failed and we were unable to recover it. 00:36:00.763 [2024-12-15 13:16:08.395935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.763 [2024-12-15 13:16:08.395969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.763 qpair failed and we were unable to recover it. 00:36:00.763 [2024-12-15 13:16:08.396084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.763 [2024-12-15 13:16:08.396117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.763 qpair failed and we were unable to recover it. 00:36:00.763 [2024-12-15 13:16:08.396294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.763 [2024-12-15 13:16:08.396326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.763 qpair failed and we were unable to recover it. 00:36:00.763 [2024-12-15 13:16:08.396457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.763 [2024-12-15 13:16:08.396490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.763 qpair failed and we were unable to recover it. 
00:36:00.766 [2024-12-15 13:16:08.421629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.421668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.421776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.421810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.421997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.422030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.422206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.422238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.422462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.422495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 
00:36:00.766 [2024-12-15 13:16:08.422669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.422701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.422811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.422853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.422976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.423009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.423182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.423215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.423395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.423427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 
00:36:00.766 [2024-12-15 13:16:08.423596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.423629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.423763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.423796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.424059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.424093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.424330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.424363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.424509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.424541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 
00:36:00.766 [2024-12-15 13:16:08.424747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.424780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.424911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.424945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.425144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.425177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.425418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.425450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.425628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.425662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 
00:36:00.766 [2024-12-15 13:16:08.425786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.425818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.425982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.426016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.426188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.426221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.426413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.426445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.426659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.426691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 
00:36:00.766 [2024-12-15 13:16:08.426884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.426920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.427096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.427129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.427309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.427342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.427598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.427630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.427778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.427810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 
00:36:00.766 [2024-12-15 13:16:08.428062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.766 [2024-12-15 13:16:08.428095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.766 qpair failed and we were unable to recover it. 00:36:00.766 [2024-12-15 13:16:08.428344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.428376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.428574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.428607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.428729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.428762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.429026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.429061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 
00:36:00.767 [2024-12-15 13:16:08.429210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.429243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.429444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.429477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.429658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.429691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.429963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.429997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.430136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.430169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 
00:36:00.767 [2024-12-15 13:16:08.430310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.430349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.430565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.430599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.430704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.430736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.430859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.430893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.431075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.431108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 
00:36:00.767 [2024-12-15 13:16:08.431353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.431386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.431571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.431604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.431865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.431899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.432051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.432085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.432193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.432226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 
00:36:00.767 [2024-12-15 13:16:08.432337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.432368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.432506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.432539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.432672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.432705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.432968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.433002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.433291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.433325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 
00:36:00.767 [2024-12-15 13:16:08.433514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.433547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.433720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.433752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.433881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.433915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.434165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.434198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.434379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.434413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 
00:36:00.767 [2024-12-15 13:16:08.434588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.434620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.434867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.434902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.435013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.435046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.435223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.435255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 00:36:00.767 [2024-12-15 13:16:08.435465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.767 [2024-12-15 13:16:08.435498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.767 qpair failed and we were unable to recover it. 
00:36:00.767 [2024-12-15 13:16:08.435697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.435730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.435924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.435958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.436080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.436114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.436303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.436335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.436485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.436517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 
00:36:00.768 [2024-12-15 13:16:08.436707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.436740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.436870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.436904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.437165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.437198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.437377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.437409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.437534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.437567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 
00:36:00.768 [2024-12-15 13:16:08.437804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.437845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.437963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.437996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.438175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.438207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.438404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.438437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 00:36:00.768 [2024-12-15 13:16:08.438573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.768 [2024-12-15 13:16:08.438606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.768 qpair failed and we were unable to recover it. 
00:36:00.768 [... identical "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7fbae8000b90 (addr=10.0.0.2, port=4420) from 2024-12-15 13:16:08.438798 through 13:16:08.444350 ...]
00:36:00.769 [2024-12-15 13:16:08.444477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.769 [2024-12-15 13:16:08.444509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.769 qpair failed and we were unable to recover it. 00:36:00.769 [2024-12-15 13:16:08.444773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.769 [2024-12-15 13:16:08.444805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.769 qpair failed and we were unable to recover it. 00:36:00.769 [2024-12-15 13:16:08.445053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.769 [2024-12-15 13:16:08.445126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.769 qpair failed and we were unable to recover it. 00:36:00.769 [2024-12-15 13:16:08.445372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.769 [2024-12-15 13:16:08.445409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.769 qpair failed and we were unable to recover it. 00:36:00.769 [2024-12-15 13:16:08.445544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.769 [2024-12-15 13:16:08.445577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.769 qpair failed and we were unable to recover it. 
00:36:00.769 [... identical "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x21f9cd0 (addr=10.0.0.2, port=4420) from 2024-12-15 13:16:08.445772 through 13:16:08.462640 ...]
00:36:00.771 [2024-12-15 13:16:08.462839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.462874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.463143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.463176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.463438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.463471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.463661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.463694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.463868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.463903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 
00:36:00.771 [2024-12-15 13:16:08.464171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.464203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.464385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.464419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.464625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.464658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.464877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.464912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.465113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.465147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 
00:36:00.771 [2024-12-15 13:16:08.465320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.465354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.465473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.465506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.465703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.465736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.465914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.465949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.466192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.466225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 
00:36:00.771 [2024-12-15 13:16:08.466364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.466398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.466520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.466552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.466726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.466759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.466941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.466975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.467151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.467184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 
00:36:00.771 [2024-12-15 13:16:08.467294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.467326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.467440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.467472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.467662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.771 [2024-12-15 13:16:08.467695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.771 qpair failed and we were unable to recover it. 00:36:00.771 [2024-12-15 13:16:08.467972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.468008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.468193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.468227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 
00:36:00.772 [2024-12-15 13:16:08.468432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.468466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.468671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.468705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.468838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.468872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.469083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.469115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.469241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.469274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 
00:36:00.772 [2024-12-15 13:16:08.469517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.469550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.469787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.469821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.469948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.469981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.470157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.470189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.470427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.470460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 
00:36:00.772 [2024-12-15 13:16:08.470580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.470613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.470853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.470887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.471092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.471126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.471307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.471341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.471516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.471550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 
00:36:00.772 [2024-12-15 13:16:08.471722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.471756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.471878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.471912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.472171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.472204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.472377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.472410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.472537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.472570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 
00:36:00.772 [2024-12-15 13:16:08.472699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.472732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.472917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.472952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.473224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.473257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.473395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.473428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.473558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.473591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 
00:36:00.772 [2024-12-15 13:16:08.473771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.473805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.474081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.474116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.474248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.474281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.474549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.474583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.474721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.474753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 
00:36:00.772 [2024-12-15 13:16:08.475008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.475043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.475266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.475299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.475513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.475547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.772 qpair failed and we were unable to recover it. 00:36:00.772 [2024-12-15 13:16:08.475716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.772 [2024-12-15 13:16:08.475750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.476015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.476050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 
00:36:00.773 [2024-12-15 13:16:08.476236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.476270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.476464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.476498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.476686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.476719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.476891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.476926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.477109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.477142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 
00:36:00.773 [2024-12-15 13:16:08.477247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.477281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.477471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.477510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.477751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.477785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.478033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.478068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.478241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.478273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 
00:36:00.773 [2024-12-15 13:16:08.478393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.478426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.478593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.478626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.478747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.478780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.479002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.479036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.479255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.479287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 
00:36:00.773 [2024-12-15 13:16:08.479463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.479496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.479686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.479719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.479974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.480009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.480126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.480159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.480360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.480393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 
00:36:00.773 [2024-12-15 13:16:08.480589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.480622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.480794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.480856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.480969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.481002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.481141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.481174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 00:36:00.773 [2024-12-15 13:16:08.481438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.773 [2024-12-15 13:16:08.481471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:00.773 qpair failed and we were unable to recover it. 
00:36:00.773 [2024-12-15 13:16:08.481579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.773 [2024-12-15 13:16:08.481612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.773 qpair failed and we were unable to recover it.
00:36:00.773 [2024-12-15 13:16:08.481721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.773 [2024-12-15 13:16:08.481754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.773 qpair failed and we were unable to recover it.
00:36:00.773 [2024-12-15 13:16:08.481862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.773 [2024-12-15 13:16:08.481898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.773 qpair failed and we were unable to recover it.
00:36:00.773 [2024-12-15 13:16:08.482002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.773 [2024-12-15 13:16:08.482035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.773 qpair failed and we were unable to recover it.
00:36:00.773 [2024-12-15 13:16:08.482276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.482310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.482498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.482531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.482651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.482683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.482876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.482911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.483113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.483151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.483341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.483374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.483544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.483578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.483699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.483732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.483938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.483972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.484082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.484115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.484296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.484329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.484505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.484538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.484664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.484697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.484962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.484996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.485111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.485144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.485354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.485387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.485583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.485615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.485791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.485832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.486008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.486042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.486311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.486344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.486607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.486640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.486759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.486792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.486979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.487012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.487205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.487238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.487503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.487537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.487726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.487759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.487995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.488029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.488155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.488189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.488473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.488505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.488622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.488655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.488860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.488894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.489088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.489127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.489304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.489337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.489459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.489492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.489665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.489697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.489886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.489921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.490091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.490123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.490255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.490288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.490424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.774 [2024-12-15 13:16:08.490456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.774 qpair failed and we were unable to recover it.
00:36:00.774 [2024-12-15 13:16:08.490653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.490686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.490950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.490985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.491113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.491145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.491262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.491295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.491432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.491465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.491662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.491695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.491820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.491865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.491972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.492005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.492191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.492224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.492365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.492398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.492514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.492546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.492669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.492702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.492887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.492922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.493107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.493141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.493316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.493349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.493477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.493510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.493685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.493717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.493844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.493878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.494073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.494106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.494287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.494320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.494455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.494488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.494667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.494700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.494818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.494861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.495123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.495157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.495398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.495430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.495557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.495589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.495779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.495812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.496013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.496047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.496164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.496195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.496367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.496399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.496542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.496574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.496776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.496809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.497081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.497116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.497370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.497442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.497732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.775 [2024-12-15 13:16:08.497770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.775 qpair failed and we were unable to recover it.
00:36:00.775 [2024-12-15 13:16:08.498052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.498089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.498227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.498260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.498433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.498466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.498732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.498766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.498981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.499017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.499199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.499231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.499437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.499469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.499700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.499733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.499969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.500005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.500281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.500313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.500504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.500538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.500731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.500765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.500979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.501015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.501252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.501286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.501531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.501564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.501763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.501795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.501941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.776 [2024-12-15 13:16:08.501974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.776 qpair failed and we were unable to recover it.
00:36:00.776 [2024-12-15 13:16:08.502217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.502249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.502424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.502457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.502632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.502666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.502846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.502881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.503083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.503116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 
00:36:00.776 [2024-12-15 13:16:08.503299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.503333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.503521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.503554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.503846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.503880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.504152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.504187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.504378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.504411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 
00:36:00.776 [2024-12-15 13:16:08.504535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.504568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.504760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.504793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.505063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.505098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.505346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.505379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.505604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.505637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 
00:36:00.776 [2024-12-15 13:16:08.505845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.505879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.506057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.506090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.506329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.506362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.776 qpair failed and we were unable to recover it. 00:36:00.776 [2024-12-15 13:16:08.506551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.776 [2024-12-15 13:16:08.506584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.506779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.506813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-12-15 13:16:08.507021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.507054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.507237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.507276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.507515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.507550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.507725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.507758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.507868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.507903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-12-15 13:16:08.508034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.508067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.508259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.508291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.508477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.508509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.508688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.508721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.508858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.508892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-12-15 13:16:08.509097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.509129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.509248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.509281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.509453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.509486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.509694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.509726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.509910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.509943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-12-15 13:16:08.510142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.510175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.510424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.510457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.510580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.510612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.510725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.510758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.510945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.510979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-12-15 13:16:08.511221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.511253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.511431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.511463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.511674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.511707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.511853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.511887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.512001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.512033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-12-15 13:16:08.512221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.512254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.512380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.512413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.512585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.512617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.512865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.512901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.513015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.513047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 
00:36:00.777 [2024-12-15 13:16:08.513189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.513222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.513347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.513379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.777 qpair failed and we were unable to recover it. 00:36:00.777 [2024-12-15 13:16:08.513548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.777 [2024-12-15 13:16:08.513581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.513806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.513848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.513976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.514009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-12-15 13:16:08.514112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.514145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.514414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.514446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.514683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.514716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.514907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.514941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.515083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.515115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-12-15 13:16:08.515292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.515325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.515563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.515602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.515787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.515820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.516005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.516038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.516228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.516260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-12-15 13:16:08.516435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.516468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.516651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.516684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.516817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.516874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.517140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.517173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.517432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.517464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-12-15 13:16:08.517658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.517691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.517913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.517948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.518187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.518220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.518343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.518376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.518637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.518669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-12-15 13:16:08.518866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.518901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.519157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.519190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.519362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.519394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.519498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.519530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.519719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.519752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-12-15 13:16:08.519945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.519980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.520176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.520208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.520390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.520422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.520557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.520589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 00:36:00.778 [2024-12-15 13:16:08.520726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.778 [2024-12-15 13:16:08.520759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.778 qpair failed and we were unable to recover it. 
00:36:00.778 [2024-12-15 13:16:08.520892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.778 [2024-12-15 13:16:08.520927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.778 qpair failed and we were unable to recover it.
00:36:00.781 (the three messages above repeat verbatim for each subsequent connect() retry, timestamps 13:16:08.521062 through 13:16:08.546580; only the timestamps differ)
00:36:00.781 [2024-12-15 13:16:08.546792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.781 [2024-12-15 13:16:08.546833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.781 qpair failed and we were unable to recover it. 00:36:00.781 [2024-12-15 13:16:08.547099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.781 [2024-12-15 13:16:08.547132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.781 qpair failed and we were unable to recover it. 00:36:00.781 [2024-12-15 13:16:08.547397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.781 [2024-12-15 13:16:08.547429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.781 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.547646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.547679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.547810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.547858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 
00:36:00.782 [2024-12-15 13:16:08.548060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.548092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.548278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.548310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.548526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.548558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.548779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.548812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.548960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.548995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 
00:36:00.782 [2024-12-15 13:16:08.549279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.549312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.549521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.549554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.549739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.549773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.549970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.550005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.550186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.550219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 
00:36:00.782 [2024-12-15 13:16:08.550349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.550382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.550610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.550643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.550820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.550864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.550994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.551026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.551237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.551269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 
00:36:00.782 [2024-12-15 13:16:08.551387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.551420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.551608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.551646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.551819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.551873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.552014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.552047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.552230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.552263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 
00:36:00.782 [2024-12-15 13:16:08.552448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.552481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.552719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.552752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.552992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.553027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.553206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.553238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.553424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.553457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 
00:36:00.782 [2024-12-15 13:16:08.553705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.553738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.553948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.553982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.554170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.554203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.554444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.554476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.554608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.554642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 
00:36:00.782 [2024-12-15 13:16:08.554892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.554927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.782 [2024-12-15 13:16:08.555049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.782 [2024-12-15 13:16:08.555083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.782 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.555267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.555300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.555489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.555522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.555720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.555752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 
00:36:00.783 [2024-12-15 13:16:08.555993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.556029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.556288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.556321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.556530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.556563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.556757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.556790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.556912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.556946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 
00:36:00.783 [2024-12-15 13:16:08.557211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.557244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.557432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.557466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.557566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.557598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.557869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.557905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.558096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.558128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 
00:36:00.783 [2024-12-15 13:16:08.558317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.558350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.558464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.558497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.558617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.558649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.558887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.558921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.559187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.559219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 
00:36:00.783 [2024-12-15 13:16:08.559340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.559376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.559567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.559598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.559784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.559816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.560040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.560075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.560246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.560280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 
00:36:00.783 [2024-12-15 13:16:08.560517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.560550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.560691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.560730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.560999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.561033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.561155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.561187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.561297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.561329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 
00:36:00.783 [2024-12-15 13:16:08.561594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.561627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.561797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.561841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.562015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.562047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.562238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.562271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.562456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.562489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 
00:36:00.783 [2024-12-15 13:16:08.562661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.562694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.562814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.562859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.563072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.563105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.563312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.563345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.563529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.563561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 
00:36:00.783 [2024-12-15 13:16:08.563776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.783 [2024-12-15 13:16:08.563810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.783 qpair failed and we were unable to recover it. 00:36:00.783 [2024-12-15 13:16:08.564002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.564035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.564204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.564236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.564479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.564511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.564614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.564647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 
00:36:00.784 [2024-12-15 13:16:08.564911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.564945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.565237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.565270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.565470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.565503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.565749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.565782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.565988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.566021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 
00:36:00.784 [2024-12-15 13:16:08.566260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.566294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.566397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.566428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.566565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.566598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.566788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.566822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.567004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.567037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 
00:36:00.784 [2024-12-15 13:16:08.567231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.567264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.567392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.567424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.567556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.567589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.567810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.567862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.568033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.568066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 
00:36:00.784 [2024-12-15 13:16:08.568243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.568275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.568523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.568556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.568763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.568796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.569037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.569071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.569247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.569279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 
00:36:00.784 [2024-12-15 13:16:08.569451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.569484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.569600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.569639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.569778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.569811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.570007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.570040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.570298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.570330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 
00:36:00.784 [2024-12-15 13:16:08.570567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.570598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.570718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.570751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.571012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.571047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.571327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.571359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 00:36:00.784 [2024-12-15 13:16:08.571492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.784 [2024-12-15 13:16:08.571523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.784 qpair failed and we were unable to recover it. 
00:36:00.784 [2024-12-15 13:16:08.571787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.784 [2024-12-15 13:16:08.571821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1208035 Killed "${NVMF_APP[@]}" "$@"
00:36:00.784 qpair failed and we were unable to recover it.
00:36:00.784 [2024-12-15 13:16:08.571969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.784 [2024-12-15 13:16:08.572003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.784 qpair failed and we were unable to recover it.
00:36:00.784 [2024-12-15 13:16:08.572271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.784 [2024-12-15 13:16:08.572303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.784 qpair failed and we were unable to recover it.
00:36:00.784 [2024-12-15 13:16:08.572495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.784 [2024-12-15 13:16:08.572527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.784 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:00.784 qpair failed and we were unable to recover it.
00:36:00.785 [2024-12-15 13:16:08.572775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.572808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 [2024-12-15 13:16:08.572939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[2024-12-15 13:16:08.572973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 [2024-12-15 13:16:08.573153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.573187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
[2024-12-15 13:16:08.573426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.573459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
[2024-12-15 13:16:08.573696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.573729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-12-15 13:16:08.573967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.574001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 [2024-12-15 13:16:08.574131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.574163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 [2024-12-15 13:16:08.574290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.574323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 [2024-12-15 13:16:08.574511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.574544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.574812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.574856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.575041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.575073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.575197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.575236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.575363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.575395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 
00:36:00.785 [2024-12-15 13:16:08.575584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.575617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.575898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.575932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.576106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.576139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.576251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.576283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.576398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.576430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 
00:36:00.785 [2024-12-15 13:16:08.576620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.576652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.576833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.576867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.576990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.577023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.577205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.577238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.577432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.577466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 
00:36:00.785 [2024-12-15 13:16:08.577633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.577664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.577851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.577884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.578099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.578131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.578312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.578344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.578538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.578570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 
00:36:00.785 [2024-12-15 13:16:08.578810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.578849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.579023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.579056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.579239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.579272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.579537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.579570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 00:36:00.785 [2024-12-15 13:16:08.579748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.785 [2024-12-15 13:16:08.579780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.785 qpair failed and we were unable to recover it. 
00:36:00.785 [2024-12-15 13:16:08.580002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.580037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 [2024-12-15 13:16:08.580149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.580183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 [2024-12-15 13:16:08.580392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.580425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.785 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1208731
00:36:00.785 [2024-12-15 13:16:08.580693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.785 [2024-12-15 13:16:08.580730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.785 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.580927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.580961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1208731
00:36:00.786 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
[2024-12-15 13:16:08.581187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.581224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1208731 ']'
[2024-12-15 13:16:08.581454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.581492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.581682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-12-15 13:16:08.581716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.581909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.581958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
[2024-12-15 13:16:08.582200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.582236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-12-15 13:16:08.582415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.582450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
[2024-12-15 13:16:08.582662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.582699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.582890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-12-15 13:16:08.582926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.583199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.583232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.583455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.583490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.583603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.583634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 00:36:00.786 [2024-12-15 13:16:08.583836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.583870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 00:36:00.786 [2024-12-15 13:16:08.584104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.584137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 00:36:00.786 [2024-12-15 13:16:08.584325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.584358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 00:36:00.786 [2024-12-15 13:16:08.584535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.584569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 
00:36:00.786 [2024-12-15 13:16:08.584775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.584808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 00:36:00.786 [2024-12-15 13:16:08.585031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.585070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 00:36:00.786 [2024-12-15 13:16:08.585175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.585209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 00:36:00.786 [2024-12-15 13:16:08.585334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.585368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 00:36:00.786 [2024-12-15 13:16:08.585565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.786 [2024-12-15 13:16:08.585598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.786 qpair failed and we were unable to recover it. 
00:36:00.786 [2024-12-15 13:16:08.585847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.585883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.586068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.586100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.586352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.586424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.586696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.586734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.587001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.587038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.587214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.587248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.587453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.587487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.587686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.587720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.587898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.587934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.588069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.588102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.786 [2024-12-15 13:16:08.588291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.786 [2024-12-15 13:16:08.588325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.786 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.588602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.588634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.588781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.588815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.589017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.589051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.589162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.589196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.589329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.589372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.589570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.589606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.589818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.589867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.589984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.590018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.590260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.590295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.590471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.590506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.590623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.590658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.590896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.590932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.591055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.591090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.591329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.591364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.591480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.591514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.591637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.591670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.591876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.591912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.592112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.592146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.592326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.592363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.592623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.592656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.592854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.592889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.592997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.593032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.593303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.593340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.593532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.593565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.593810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.593856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.593985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.594018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.594200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.594234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.594426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.594460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.594641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.594677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.594807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.594852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.595037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.595070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.595317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.595391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.595664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.595705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.595877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.595913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.596037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.596070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.596267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.787 [2024-12-15 13:16:08.596303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.787 qpair failed and we were unable to recover it.
00:36:00.787 [2024-12-15 13:16:08.596554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.596587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.596767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.596799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.597070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.597105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.597297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.597331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.597462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.597495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.597605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.597639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.597755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.597789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.597978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.598012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.598205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.598249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.598515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.598551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.598671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.598703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.598936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.598971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.599187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.599220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.599397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.599430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.599627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.599662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.599786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.599819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.600024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.600059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.600244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.600279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.600476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.600511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.600683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.600715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.600892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.600929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.601172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.601208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.601429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.601469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.601678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.601713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.601837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.601871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.602066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.602099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.602292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.602326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.602500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.602534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.602659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.602692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.602816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.602863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.602976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.603010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.603123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.603158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.603352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.603384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.603575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.603609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.603740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.603775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.604006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.604080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.604304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.788 [2024-12-15 13:16:08.604341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.788 qpair failed and we were unable to recover it.
00:36:00.788 [2024-12-15 13:16:08.604467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.604501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.604717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.604750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.604962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.604998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.605240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.605273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.605542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.605582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.605769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.605802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.605948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.605982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.606220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.606253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.606468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.606500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.606643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.606678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.606861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:00.789 [2024-12-15 13:16:08.606896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:00.789 qpair failed and we were unable to recover it.
00:36:00.789 [2024-12-15 13:16:08.607071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.607115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.607236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.607273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.607552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.607588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.607789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.607843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.608088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.608123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 
00:36:00.789 [2024-12-15 13:16:08.608301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.608334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.608527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.608561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.608758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.608791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.608937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.608972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.609142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.609175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 
00:36:00.789 [2024-12-15 13:16:08.609359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.609392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.609588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.609621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.609861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.609896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.610090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.610123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.610330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.610363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 
00:36:00.789 [2024-12-15 13:16:08.610535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.610566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.610743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.610777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.610956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.610990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.611132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.611164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.611286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.611319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 
00:36:00.789 [2024-12-15 13:16:08.611433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.611465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.611642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.611675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.611913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.611948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.612061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.612095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.612215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.612247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 
00:36:00.789 [2024-12-15 13:16:08.612374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.612407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.612628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.612661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.612910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.789 [2024-12-15 13:16:08.612952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.789 qpair failed and we were unable to recover it. 00:36:00.789 [2024-12-15 13:16:08.613131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.613166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.613356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.613388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 
00:36:00.790 [2024-12-15 13:16:08.613514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.613548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.613793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.613850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.614092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.614128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.614380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.614414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.614627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.614661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 
00:36:00.790 [2024-12-15 13:16:08.614956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.614992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.615184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.615218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.615405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.615440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.615655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.615690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.615904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.615946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 
00:36:00.790 [2024-12-15 13:16:08.616222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.616265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.616451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.616489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.616684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.616718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.617002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.617038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.617222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.617258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 
00:36:00.790 [2024-12-15 13:16:08.617502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.617537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.617722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.617765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.617961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.618001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.618183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.618225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.618436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.618473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 
00:36:00.790 [2024-12-15 13:16:08.618741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.618778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.618961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.618996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.619199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.619232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.619429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.619462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.619681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.619722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 
00:36:00.790 [2024-12-15 13:16:08.619924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.619959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.620132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.620165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.620445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.620478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.620667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.620702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.620950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.620986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 
00:36:00.790 [2024-12-15 13:16:08.621258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.621292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.621541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.621580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.621770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.621805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.622012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.622047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.622251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.622286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 
00:36:00.790 [2024-12-15 13:16:08.622497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.790 [2024-12-15 13:16:08.622532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.790 qpair failed and we were unable to recover it. 00:36:00.790 [2024-12-15 13:16:08.622718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.622754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.623004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.623042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.623309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.623341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.623530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.623563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 
00:36:00.791 [2024-12-15 13:16:08.623774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.623807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.624008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.624041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.624212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.624245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.624362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.624395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.624502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.624535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 
00:36:00.791 [2024-12-15 13:16:08.624661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.624694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.624967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.625001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.625249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.625282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.625416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.625449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.625633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.625664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 
00:36:00.791 [2024-12-15 13:16:08.625849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.625892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.626127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.626159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.626344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.626377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.626480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.626511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 00:36:00.791 [2024-12-15 13:16:08.626754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.626786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 
00:36:00.791 [2024-12-15 13:16:08.626981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.791 [2024-12-15 13:16:08.627015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:00.791 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / qpair failure pairs for tqpair=0x7fbae8000b90 repeated; omitted ...]
00:36:00.792 [2024-12-15 13:16:08.631924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:00.792 [2024-12-15 13:16:08.631967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:00.792 qpair failed and we were unable to recover it. 
[... identical pairs for tqpair=0x7fbae4000b90 repeated; omitted ...]
00:36:00.792 [2024-12-15 13:16:08.633233] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:00.792 [2024-12-15 13:16:08.633283] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
[... identical pairs for tqpair=0x7fbae4000b90 continue; omitted ...]
00:36:01.071 [2024-12-15 13:16:08.651056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.071 [2024-12-15 13:16:08.651131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.071 qpair failed and we were unable to recover it. 
[... identical pairs for tqpair=0x7fbaf0000b90 repeated; omitted ...]
00:36:01.071 [2024-12-15 13:16:08.652543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.071 [2024-12-15 13:16:08.652586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.071 qpair failed and we were unable to recover it. 00:36:01.071 [2024-12-15 13:16:08.652767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.071 [2024-12-15 13:16:08.652801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.071 qpair failed and we were unable to recover it. 00:36:01.071 [2024-12-15 13:16:08.653025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.071 [2024-12-15 13:16:08.653058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.071 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.653239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.653272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.653451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.653484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 
00:36:01.072 [2024-12-15 13:16:08.653691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.653724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.653857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.653892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.654072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.654105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.654293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.654327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.654516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.654549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 
00:36:01.072 [2024-12-15 13:16:08.654672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.654705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.654905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.654940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.655149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.655182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.655394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.655427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.655610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.655644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 
00:36:01.072 [2024-12-15 13:16:08.655914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.655949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.656123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.656157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.656364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.656396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.656593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.656627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.656797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.656843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 
00:36:01.072 [2024-12-15 13:16:08.657044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.657077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.657297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.657331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.657478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.657512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.657712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.657746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.657991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.658027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 
00:36:01.072 [2024-12-15 13:16:08.658219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.658252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.658438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.658471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.658750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.658823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.659036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.659075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.659379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.659418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 
00:36:01.072 [2024-12-15 13:16:08.659661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.659695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.659885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.659919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.660042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.660076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.660200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.660233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.660427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.660461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 
00:36:01.072 [2024-12-15 13:16:08.660703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.660737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.660981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.661017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.661199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.661233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.661394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.661429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.661678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.661712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 
00:36:01.072 [2024-12-15 13:16:08.661890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.661931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.662137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.072 [2024-12-15 13:16:08.662179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.072 qpair failed and we were unable to recover it. 00:36:01.072 [2024-12-15 13:16:08.662447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.662480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.662688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.662722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.662904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.662951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 
00:36:01.073 [2024-12-15 13:16:08.663101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.663134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.663375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.663408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.663690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.663724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.663920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.663955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.664191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.664224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 
00:36:01.073 [2024-12-15 13:16:08.664464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.664497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.664683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.664717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.664892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.664927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.665143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.665176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.665394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.665428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 
00:36:01.073 [2024-12-15 13:16:08.665619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.665653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.665919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.665953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.666138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.666171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.666344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.666377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.666549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.666582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 
00:36:01.073 [2024-12-15 13:16:08.666823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.666869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.667059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.667092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.667291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.667324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.667430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.667464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.667642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.667676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 
00:36:01.073 [2024-12-15 13:16:08.667801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.667846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.667961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.667995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.668117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.668154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.668343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.668376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.668494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.668527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 
00:36:01.073 [2024-12-15 13:16:08.668773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.668806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.669063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.669095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.669309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.669341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.669534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.669566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.669690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.669722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 
00:36:01.073 [2024-12-15 13:16:08.669987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.670022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.670145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.670177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.670348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.670381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.670633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.670666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 00:36:01.073 [2024-12-15 13:16:08.670855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.073 [2024-12-15 13:16:08.670889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.073 qpair failed and we were unable to recover it. 
00:36:01.074 [2024-12-15 13:16:08.675859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2207c70 (9): Bad file descriptor
00:36:01.074 [2024-12-15 13:16:08.676061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.074 [2024-12-15 13:16:08.676098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.074 qpair failed and we were unable to recover it.
00:36:01.076 [2024-12-15 13:16:08.696243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.076 [2024-12-15 13:16:08.696276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.076 qpair failed and we were unable to recover it. 00:36:01.076 [2024-12-15 13:16:08.696518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.076 [2024-12-15 13:16:08.696550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.076 qpair failed and we were unable to recover it. 00:36:01.076 [2024-12-15 13:16:08.696765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.076 [2024-12-15 13:16:08.696798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.076 qpair failed and we were unable to recover it. 00:36:01.076 [2024-12-15 13:16:08.696991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.697025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.697203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.697236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 
00:36:01.077 [2024-12-15 13:16:08.697419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.697451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.697576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.697609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.697779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.697811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.697999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.698034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.698324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.698358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 
00:36:01.077 [2024-12-15 13:16:08.698492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.698525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.698797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.698841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.699055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.699088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.699313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.699346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.699623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.699656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 
00:36:01.077 [2024-12-15 13:16:08.699909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.699945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.700083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.700115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.700307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.700340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.700531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.700564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.700834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.700868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 
00:36:01.077 [2024-12-15 13:16:08.700999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.701032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.701150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.701184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.701386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.701418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.701601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.701640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.701813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.701867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 
00:36:01.077 [2024-12-15 13:16:08.702073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.702106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.702242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.702275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.702459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.702492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.702737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.702770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.703064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.703098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 
00:36:01.077 [2024-12-15 13:16:08.703289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.703321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.703563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.703597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.703712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.703745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.703938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.703973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.704221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.704254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 
00:36:01.077 [2024-12-15 13:16:08.704443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.704476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.704735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.704767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.704919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.704954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.705138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.705172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.705292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.705325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 
00:36:01.077 [2024-12-15 13:16:08.705503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.705536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.705706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.077 [2024-12-15 13:16:08.705730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.077 [2024-12-15 13:16:08.705762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.077 qpair failed and we were unable to recover it. 00:36:01.077 [2024-12-15 13:16:08.705967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.706002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.706258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.706291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.706480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.706514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 
00:36:01.078 [2024-12-15 13:16:08.706751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.706785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.707010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.707045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.707259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.707289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.707482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.707513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.707782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.707813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 
00:36:01.078 [2024-12-15 13:16:08.707960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.707991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.708096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.708126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.708263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.708293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.708533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.708564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.708810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.708851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 
00:36:01.078 [2024-12-15 13:16:08.709096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.709128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.709241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.709271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.709453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.709483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.709678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.709709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.709964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.709996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 
00:36:01.078 [2024-12-15 13:16:08.710193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.710224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.710429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.710461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.710679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.710711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.710948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.711003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.711163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.711197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 
00:36:01.078 [2024-12-15 13:16:08.711378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.711407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.711535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.711565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.711810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.711856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.712043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.712072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.712197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.712228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 
00:36:01.078 [2024-12-15 13:16:08.712364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.712394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.712635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.712667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.712877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.712910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.713089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.713120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.713303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.713335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 
00:36:01.078 [2024-12-15 13:16:08.713508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.713538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.713724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.713758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.713909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.713945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.714065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.714098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.714272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.714307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 
00:36:01.078 [2024-12-15 13:16:08.714493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.714526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.714766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.078 [2024-12-15 13:16:08.714800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.078 qpair failed and we were unable to recover it. 00:36:01.078 [2024-12-15 13:16:08.715003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.715036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.715225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.715258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.715523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.715558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 
00:36:01.079 [2024-12-15 13:16:08.715770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.715803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.716005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.716040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.716213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.716247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.716352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.716384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.716652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.716686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 
00:36:01.079 [2024-12-15 13:16:08.716801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.716854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.717062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.717095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.717235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.717268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.717530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.717563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.717735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.717767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 
00:36:01.079 [2024-12-15 13:16:08.717900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.717934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.718061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.718094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.718212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.718244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.718482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.718515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.718772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.718804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 
00:36:01.079 [2024-12-15 13:16:08.718927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.718961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.719242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.719275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.719516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.719549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.719668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.719701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.719950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.719985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 
00:36:01.079 [2024-12-15 13:16:08.720112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.720144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.720362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.720395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.720566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.720599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.720768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.720800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.720990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.721024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 
00:36:01.079 [2024-12-15 13:16:08.721205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.721237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.721428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.721460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.721583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.721615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.721804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.721843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.722021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.722053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 
00:36:01.079 [2024-12-15 13:16:08.722323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.722355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.079 [2024-12-15 13:16:08.722538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.079 [2024-12-15 13:16:08.722571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.079 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.722703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.722741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.723008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.723043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.723254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.723286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 
00:36:01.080 [2024-12-15 13:16:08.723460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.723492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.723673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.723706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.723910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.723944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.724123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.724155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.724393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.724425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 
00:36:01.080 [2024-12-15 13:16:08.724616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.724649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.724772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.724804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.724938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.724972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.725150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.725184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.725368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.725401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 
00:36:01.080 [2024-12-15 13:16:08.725644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.725679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.725876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.725912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.726166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.726202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.726414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.726446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.726587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.726619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 
00:36:01.080 [2024-12-15 13:16:08.726884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.726920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.727032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.727065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.727236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.727270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.727445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.727479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.727585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.727617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 
00:36:01.080 [2024-12-15 13:16:08.727839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.727875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.728105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.728138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.728326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.728359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.728377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.080 [2024-12-15 13:16:08.728410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.080 [2024-12-15 13:16:08.728417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.080 [2024-12-15 13:16:08.728424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.080 [2024-12-15 13:16:08.728432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:01.080 [2024-12-15 13:16:08.728574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.728604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.728800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.728851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.728987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.729019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.729137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.729168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.729427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.729460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 
00:36:01.080 [2024-12-15 13:16:08.729588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.729620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.729741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.729773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.729769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:01.080 [2024-12-15 13:16:08.729797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:01.080 [2024-12-15 13:16:08.729983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.729986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:01.080 [2024-12-15 13:16:08.730014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.080 [2024-12-15 13:16:08.729987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:01.080 [2024-12-15 13:16:08.730301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.730334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 
00:36:01.080 [2024-12-15 13:16:08.730550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.080 [2024-12-15 13:16:08.730582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.080 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.730854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.730890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.731066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.731100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.731289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.731322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.731436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.731468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 
00:36:01.081 [2024-12-15 13:16:08.731638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.731671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.731863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.731896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.732162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.732195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.732371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.732402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.732573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.732604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 
00:36:01.081 [2024-12-15 13:16:08.732781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.732813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.733072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.733105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.733296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.733328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.733576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.733608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.733722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.733754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 
00:36:01.081 [2024-12-15 13:16:08.733886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.733919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.734102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.734141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.734339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.734373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.734562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.734595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.734713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.734745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 
00:36:01.081 [2024-12-15 13:16:08.734982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.735018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.735262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.735295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.735479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.735512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.735624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.735656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.735840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.735872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 
00:36:01.081 [2024-12-15 13:16:08.735992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.736024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.736209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.736243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.736503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.736535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.736743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.736775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.736915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.736949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 
00:36:01.081 [2024-12-15 13:16:08.737167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.737200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.737449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.737482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.737663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.737696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.737934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.737968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.738214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.738246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 
00:36:01.081 [2024-12-15 13:16:08.738459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.738493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.738754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.738787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.738923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.738956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.739089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.739120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.081 [2024-12-15 13:16:08.739297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.739331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 
00:36:01.081 [2024-12-15 13:16:08.739523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.081 [2024-12-15 13:16:08.739556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.081 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.739739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.739771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.739969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.740002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.740118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.740156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.740382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.740415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 
00:36:01.082 [2024-12-15 13:16:08.740536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.740569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.740767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.740799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.740991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.741024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.741136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.741172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.741295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.741330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 
00:36:01.082 [2024-12-15 13:16:08.741506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.741539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.741754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.741786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.741923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.741958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.742151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.742183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.742291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.742323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 
00:36:01.082 [2024-12-15 13:16:08.742453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.742486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.742728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.742761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.742933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.742996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.743339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.743390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.743688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.743723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 
00:36:01.082 [2024-12-15 13:16:08.743966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.744004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.744137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.744170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.744297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.744330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.744522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.744555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.744752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.744785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 
00:36:01.082 [2024-12-15 13:16:08.745063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.745099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.745232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.745265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.745531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.745565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.745684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.745717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.745848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.745882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 
00:36:01.082 [2024-12-15 13:16:08.746011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.746053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.746185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.746218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.746389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.746422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.746561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.746594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.746769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.746803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 
00:36:01.082 [2024-12-15 13:16:08.747006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.747041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.747223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.747258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.747507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.747540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.747729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.747762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 00:36:01.082 [2024-12-15 13:16:08.748027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.082 [2024-12-15 13:16:08.748063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.082 qpair failed and we were unable to recover it. 
00:36:01.082 [2024-12-15 13:16:08.748354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.748388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.748560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.748594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.748766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.748799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.748989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.749024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.749282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.749316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 
00:36:01.083 [2024-12-15 13:16:08.749507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.749541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.749783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.749816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.749964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.749999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.750107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.750142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.750446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.750480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 
00:36:01.083 [2024-12-15 13:16:08.750665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.750698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.750891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.750926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.751196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.751229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.751416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.751450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.751699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.751732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 
00:36:01.083 [2024-12-15 13:16:08.751927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.751961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.752139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.752172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae8000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.752309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.752354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.752486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.752521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.752787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.752821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 
00:36:01.083 [2024-12-15 13:16:08.753023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.753058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.753167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.753200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.753381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.753415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.753526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.753559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.753756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.753790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 
00:36:01.083 [2024-12-15 13:16:08.753922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.753958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.754101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.754135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.754334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.754367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.754542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.754576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.754789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.754822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 
00:36:01.083 [2024-12-15 13:16:08.755017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.755061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.755305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.755340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.755478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.755514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.755709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.755744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 00:36:01.083 [2024-12-15 13:16:08.755950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.083 [2024-12-15 13:16:08.755987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.083 qpair failed and we were unable to recover it. 
00:36:01.084 [2024-12-15 13:16:08.760529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.084 [2024-12-15 13:16:08.760563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.084 qpair failed and we were unable to recover it.
00:36:01.084 [2024-12-15 13:16:08.760752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.084 [2024-12-15 13:16:08.760789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.084 qpair failed and we were unable to recover it.
00:36:01.084 [2024-12-15 13:16:08.761048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.084 [2024-12-15 13:16:08.761117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.084 qpair failed and we were unable to recover it.
00:36:01.084 [2024-12-15 13:16:08.761349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.084 [2024-12-15 13:16:08.761394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.084 qpair failed and we were unable to recover it.
00:36:01.084 [2024-12-15 13:16:08.761542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.084 [2024-12-15 13:16:08.761576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.084 qpair failed and we were unable to recover it.
00:36:01.086 [2024-12-15 13:16:08.782326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.086 [2024-12-15 13:16:08.782359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.086 qpair failed and we were unable to recover it. 00:36:01.086 [2024-12-15 13:16:08.782597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.086 [2024-12-15 13:16:08.782633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.086 qpair failed and we were unable to recover it. 00:36:01.086 [2024-12-15 13:16:08.782769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.086 [2024-12-15 13:16:08.782804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.086 qpair failed and we were unable to recover it. 00:36:01.086 [2024-12-15 13:16:08.783080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.086 [2024-12-15 13:16:08.783116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.086 qpair failed and we were unable to recover it. 00:36:01.086 [2024-12-15 13:16:08.783312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.086 [2024-12-15 13:16:08.783346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.086 qpair failed and we were unable to recover it. 
00:36:01.086 [2024-12-15 13:16:08.783611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.086 [2024-12-15 13:16:08.783645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.086 qpair failed and we were unable to recover it. 00:36:01.086 [2024-12-15 13:16:08.783887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.086 [2024-12-15 13:16:08.783925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.086 qpair failed and we were unable to recover it. 00:36:01.086 [2024-12-15 13:16:08.784174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.086 [2024-12-15 13:16:08.784210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.086 qpair failed and we were unable to recover it. 00:36:01.086 [2024-12-15 13:16:08.784505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.086 [2024-12-15 13:16:08.784541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.086 qpair failed and we were unable to recover it. 00:36:01.086 [2024-12-15 13:16:08.784842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.784877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 
00:36:01.087 [2024-12-15 13:16:08.785132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.785165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.785346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.785379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.785575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.785609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.785812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.785868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.786064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.786098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 
00:36:01.087 [2024-12-15 13:16:08.786386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.786420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.786541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.786575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.786844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.786878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.787168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.787202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.787406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.787439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 
00:36:01.087 [2024-12-15 13:16:08.787707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.787741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.787987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.788024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.788262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.788294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.788469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.788503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.788740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.788773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 
00:36:01.087 [2024-12-15 13:16:08.788898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.788932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.789054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.789088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.789326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.789359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.789597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.789630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 00:36:01.087 [2024-12-15 13:16:08.789891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.087 [2024-12-15 13:16:08.789957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.087 qpair failed and we were unable to recover it. 
00:36:01.087 [2024-12-15 13:16:08.790215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.087 [2024-12-15 13:16:08.790248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.087 qpair failed and we were unable to recover it.
00:36:01.087 [2024-12-15 13:16:08.790358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.087 [2024-12-15 13:16:08.790392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.087 qpair failed and we were unable to recover it.
00:36:01.087 [2024-12-15 13:16:08.790652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.087 [2024-12-15 13:16:08.790684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.087 qpair failed and we were unable to recover it.
00:36:01.087 [2024-12-15 13:16:08.790881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.087 [2024-12-15 13:16:08.790915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.087 qpair failed and we were unable to recover it.
00:36:01.087 [2024-12-15 13:16:08.791165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.087 [2024-12-15 13:16:08.791225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.087 qpair failed and we were unable to recover it.
00:36:01.087 [2024-12-15 13:16:08.791465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.087 [2024-12-15 13:16:08.791517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.087 qpair failed and we were unable to recover it.
00:36:01.088 [... identical connect() failure for tqpair=0x7fbaf0000b90 repeated at successive timestamps, 13:16:08.791782 through 13:16:08.800948 ...]
00:36:01.088 [2024-12-15 13:16:08.801145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.088 [2024-12-15 13:16:08.801182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.088 qpair failed and we were unable to recover it.
00:36:01.089 [... identical connect() failure for tqpair=0x7fbae4000b90 repeated at successive timestamps, 13:16:08.801443 through 13:16:08.806457 ...]
00:36:01.089 [2024-12-15 13:16:08.806715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.806748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.807011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.807047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.807333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.807366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.807632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.807666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.807918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.807953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 
00:36:01.089 [2024-12-15 13:16:08.808146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.808180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.808425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.808459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.808739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.808773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.809051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.809085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.809296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.809330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 
00:36:01.089 [2024-12-15 13:16:08.809545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.809578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.809836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.809872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.810093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.810127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.810373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.810407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.810640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.810677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 
00:36:01.089 [2024-12-15 13:16:08.810961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.810998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.811262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.811295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.811541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.811575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.811693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.811727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.811967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.812007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 
00:36:01.089 [2024-12-15 13:16:08.812230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.812262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.812506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.812540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.812836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.812872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.813057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.813090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.813267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.813300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 
00:36:01.089 [2024-12-15 13:16:08.813567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.813601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.813811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.813868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.814085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.089 [2024-12-15 13:16:08.814117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.089 qpair failed and we were unable to recover it. 00:36:01.089 [2024-12-15 13:16:08.814302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.814336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.814597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.814632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 
00:36:01.090 [2024-12-15 13:16:08.814845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.814880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.815070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.815104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.815281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.815314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.815516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.815550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.815737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.815771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 
00:36:01.090 [2024-12-15 13:16:08.816047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.816082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.816274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.816306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.816562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.816596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.816845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.816879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.817053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.817087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 
00:36:01.090 [2024-12-15 13:16:08.817325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.817359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.817541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.817574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.817746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.817779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.818079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.818114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.818313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.818347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 
00:36:01.090 [2024-12-15 13:16:08.818469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.818503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.818690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.818724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.818976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.819012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.819252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.819285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.819411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.819445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 
00:36:01.090 [2024-12-15 13:16:08.819702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.819737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.819930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.819965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.820157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.820191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.820375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.820408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 00:36:01.090 [2024-12-15 13:16:08.820658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.820692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it. 
00:36:01.090 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:01.090 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:01.090 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:01.090 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:01.090 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:01.090 [2024-12-15 13:16:08.821896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.090 [2024-12-15 13:16:08.821939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.090 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error pair, now for tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420, repeated from 13:16:08.821 through 13:16:08.829, each followed by "qpair failed and we were unable to recover it." ...]
00:36:01.091 [2024-12-15 13:16:08.830070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.830104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 00:36:01.091 [2024-12-15 13:16:08.830346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.830380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 00:36:01.091 [2024-12-15 13:16:08.830578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.830611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 00:36:01.091 [2024-12-15 13:16:08.830853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.830889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 00:36:01.091 [2024-12-15 13:16:08.831148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.831180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 
00:36:01.091 [2024-12-15 13:16:08.831368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.831402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 00:36:01.091 [2024-12-15 13:16:08.831611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.831644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 00:36:01.091 [2024-12-15 13:16:08.831890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.831924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 00:36:01.091 [2024-12-15 13:16:08.832067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.832103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 00:36:01.091 [2024-12-15 13:16:08.832247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.832281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 
00:36:01.091 [2024-12-15 13:16:08.832479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.091 [2024-12-15 13:16:08.832510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.091 qpair failed and we were unable to recover it. 00:36:01.091 [2024-12-15 13:16:08.832688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.832722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.832947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.832982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.833168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.833203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.833341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.833377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 
00:36:01.092 [2024-12-15 13:16:08.833620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.833654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.833908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.833943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.834156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.834196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.834382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.834416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.834605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.834638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 
00:36:01.092 [2024-12-15 13:16:08.834841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.834874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.835063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.835094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.835284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.835318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.835597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.835629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.835878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.835915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 
00:36:01.092 [2024-12-15 13:16:08.836058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.836091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.836227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.836260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.836452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.836486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.836729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.836760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.837040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.837075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 
00:36:01.092 [2024-12-15 13:16:08.837270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.837302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.837593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.837626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.837746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.837778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.837953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.837986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.838270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.838303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 
00:36:01.092 [2024-12-15 13:16:08.838444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.838475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.838662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.838694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.838948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.838982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.839125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.839157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.839330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.839364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 
00:36:01.092 [2024-12-15 13:16:08.839618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.839651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.839940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.839976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.840119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.840151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.840334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.840366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.840610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.840648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 
00:36:01.092 [2024-12-15 13:16:08.840843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.092 [2024-12-15 13:16:08.840876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.092 qpair failed and we were unable to recover it. 00:36:01.092 [2024-12-15 13:16:08.841017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.841049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.841241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.841277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.841480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.841512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.841705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.841739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 
00:36:01.093 [2024-12-15 13:16:08.842009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.842043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.842189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.842223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.842415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.842448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.842654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.842685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.842918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.842953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 
00:36:01.093 [2024-12-15 13:16:08.843131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.843162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.843305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.843336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.843623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.843656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.843867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.843912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.844110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.844145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 
00:36:01.093 [2024-12-15 13:16:08.844325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.844357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.844640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.844674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.844882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.844917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.845025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.845059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.845241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.845276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 
00:36:01.093 [2024-12-15 13:16:08.845393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.845425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.845691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.845725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.845922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.845957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.846125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.846158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.846343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.846376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 
00:36:01.093 [2024-12-15 13:16:08.846629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.846662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.846879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.846921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.847172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.847205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.847529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.847564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.847749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.847783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 
00:36:01.093 [2024-12-15 13:16:08.847958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.847994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.848185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.848219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.848482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.848514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.848707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.848740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.848927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.848966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 
00:36:01.093 [2024-12-15 13:16:08.849161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.849195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.849383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.849416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.849632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.849665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.849947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.849984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 00:36:01.093 [2024-12-15 13:16:08.850127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.850160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.093 qpair failed and we were unable to recover it. 
00:36:01.093 [2024-12-15 13:16:08.850306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.093 [2024-12-15 13:16:08.850340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.850613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.850647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.850776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.850810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.851063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.851097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.851290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.851324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 
00:36:01.094 [2024-12-15 13:16:08.851566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.851598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.851785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.851818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.851973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.852008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.852142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.852176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.852415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.852448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 
00:36:01.094 [2024-12-15 13:16:08.852662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.852695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.852953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.852989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.853129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.853162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.853323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.853368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.853558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.853593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 
00:36:01.094 [2024-12-15 13:16:08.853800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.853845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.853969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.854002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.854133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.854166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.854299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.854335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.854558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.854592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 
00:36:01.094 [2024-12-15 13:16:08.854790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.854839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.854987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.855020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.855125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.855156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.855343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.855376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.855576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.855609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 
00:36:01.094 [2024-12-15 13:16:08.855738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.855772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.856046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.856095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.856361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.856394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.856540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.856573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.856749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.856782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 
00:36:01.094 [2024-12-15 13:16:08.856954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.856988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.857187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.857220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.857365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.857398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.857595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.857627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.857849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.857883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 
00:36:01.094 [2024-12-15 13:16:08.858013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.858046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.094 [2024-12-15 13:16:08.858241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.858275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 [2024-12-15 13:16:08.858514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.858547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 00:36:01.094 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:01.094 [2024-12-15 13:16:08.858748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.094 [2024-12-15 13:16:08.858784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.094 qpair failed and we were unable to recover it. 
00:36:01.095 [2024-12-15 13:16:08.858952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.858988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.859117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.859150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.095 [2024-12-15 13:16:08.859362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.859396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.859662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.859695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 
00:36:01.095 [2024-12-15 13:16:08.859835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.859870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.859976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.860007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.860183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.860217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.860355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.860388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.860510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.860543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 
00:36:01.095 [2024-12-15 13:16:08.860779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.860812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.860959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.860992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.861180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.861213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.861480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.861518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.861804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.861847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 
00:36:01.095 [2024-12-15 13:16:08.861976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.862009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.862138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.862170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.862362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.862394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.862624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.862657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.862948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.862983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 
00:36:01.095 [2024-12-15 13:16:08.863121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.863153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.863296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.863328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.863456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.863489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.863738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.863771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.864004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.864039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 
00:36:01.095 [2024-12-15 13:16:08.864183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.864215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.864338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.864369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.864659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.864693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.864908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.864943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.865130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.865163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 
00:36:01.095 [2024-12-15 13:16:08.865288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.865321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.865455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.865487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.865699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.865732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.865950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.865986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.866129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.866161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 
00:36:01.095 [2024-12-15 13:16:08.866401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.866434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.866737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.866770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.866960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.866995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.867134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.095 [2024-12-15 13:16:08.867166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.095 qpair failed and we were unable to recover it. 00:36:01.095 [2024-12-15 13:16:08.867382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.867415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 
00:36:01.096 [2024-12-15 13:16:08.867529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.867567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.867740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.867771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.868055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.868090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.868349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.868383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.868562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.868593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 
00:36:01.096 [2024-12-15 13:16:08.868778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.868812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.868968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.869002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.869192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.869224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.869484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.869517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.869817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.869867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 
00:36:01.096 [2024-12-15 13:16:08.870114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.870148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.870387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.870420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.870674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.870707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.870948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.870983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 00:36:01.096 [2024-12-15 13:16:08.871169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.096 [2024-12-15 13:16:08.871203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420 00:36:01.096 qpair failed and we were unable to recover it. 
00:36:01.096 [2024-12-15 13:16:08.871378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.871411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.871676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.871708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.871923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.871957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.872086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.872119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.872384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.872417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.872629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.872662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.872952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.872987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.873253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.873285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.873528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.873560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.873796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.873837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.874030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.874063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.874235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.874268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.874386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.874426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.874601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.874633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.874895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.874930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.875171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.875203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.875477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.875511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.875629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.875662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.875943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.875977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.876219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.876251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.876427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.876461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.876743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.096 [2024-12-15 13:16:08.876775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.096 qpair failed and we were unable to recover it.
00:36:01.096 [2024-12-15 13:16:08.877051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.877086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.877372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.877406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.877606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.877639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.877886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.877921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.878066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.878100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.878363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.878397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.878593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.878626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.878805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.878849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.879034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.879067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.879316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.879349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.879560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.879594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.879785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.879819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.879971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.880005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.880204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.880236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.880440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.880473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.880653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.880687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.880922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.880957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.881138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.881176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.881305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.881338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.881545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.881578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.881749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.881782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.882063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.882098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.882241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.882275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.882535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.882568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.882862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.882898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.883091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.883125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.883315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.883348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.883543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.883576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.883763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.883796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f9cd0 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.884167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.884207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.884469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.884502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.884645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.884680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.884951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.884987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.885205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.885239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.097 [2024-12-15 13:16:08.885374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.097 [2024-12-15 13:16:08.885408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.097 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.885593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.885627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.885901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.885938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.886088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.886123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.886363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.886395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.886580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.886614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.886857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.886893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.887085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.887119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.887297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.887331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.887514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.887548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.887724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.887764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.888054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.888089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.888351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.888384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.888505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.888537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.888776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.888811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.888999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.889033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.889279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.889316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.889607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.889641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.889852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.889889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.890066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.890099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.890290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.890324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.890561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.890595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.890849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.890885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.891168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.891201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.891407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.891441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.891657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.891690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.891932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.891966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.892228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.892262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.892371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.892404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.892664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.892696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.892878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.892913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.893175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.893208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.893342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.893375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.893564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.893597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.893727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.893761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 Malloc0
00:36:01.098 [2024-12-15 13:16:08.893945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.894008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.894210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.894244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.894363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.894396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:01.098 [2024-12-15 13:16:08.894673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.098 [2024-12-15 13:16:08.894707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420
00:36:01.098 qpair failed and we were unable to recover it.
00:36:01.098 [2024-12-15 13:16:08.894948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.894983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:01.099 [2024-12-15 13:16:08.895223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.895255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.099 [2024-12-15 13:16:08.895448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.895482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.099 [2024-12-15 13:16:08.895672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.895706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 
00:36:01.099 [2024-12-15 13:16:08.895887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.895921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.896102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.896136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.896319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.896351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.896591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.896624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.896869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.896905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 
00:36:01.099 [2024-12-15 13:16:08.897034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.897069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.897266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.897299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.897560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.897593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.897878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.897912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.898167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.898200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 
00:36:01.099 [2024-12-15 13:16:08.898416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.898448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.898635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.898669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.898815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.898869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.899139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.899173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.899436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.899468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 
00:36:01.099 [2024-12-15 13:16:08.899610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.899643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.899906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.899941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.900221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.900253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.900449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.900482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.900746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.900780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 
00:36:01.099 [2024-12-15 13:16:08.900980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.901015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.901265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.901299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.901383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.099 [2024-12-15 13:16:08.901535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.901568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.901777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.901810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.902074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.902108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 
00:36:01.099 [2024-12-15 13:16:08.902245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.902278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.902516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.902549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.902814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.902867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.903059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.903092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.903284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.903318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 
00:36:01.099 [2024-12-15 13:16:08.903554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.903587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.903854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.903889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.904138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.904173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.904434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.099 [2024-12-15 13:16:08.904467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.099 qpair failed and we were unable to recover it. 00:36:01.099 [2024-12-15 13:16:08.904611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.904643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 
00:36:01.100 [2024-12-15 13:16:08.904910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.904946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.905224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.905258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.905530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.905563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.905843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.905878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.906089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.906123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 
00:36:01.100 [2024-12-15 13:16:08.906390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.906423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.906627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.906660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.906925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.906960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.907145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.907180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.907444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.907478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 
00:36:01.100 [2024-12-15 13:16:08.907652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.907689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.907957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.907993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.908275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.908308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.908582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.908615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.908896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.908931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 
00:36:01.100 [2024-12-15 13:16:08.909203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.909236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.909526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.909559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.909673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.909706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.909966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.910001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.910140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.910174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 
00:36:01.100 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.100 [2024-12-15 13:16:08.910415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.910449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:01.100 [2024-12-15 13:16:08.910686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.910719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.910889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.910924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.100 qpair failed and we were unable to recover it. 
00:36:01.100 [2024-12-15 13:16:08.911173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.911207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.100 [2024-12-15 13:16:08.911386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.911419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.911614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.911647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.911909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.911943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.912200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.912233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 
00:36:01.100 [2024-12-15 13:16:08.912427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.912460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.912641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.912675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.912935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.912970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.913101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.913134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.913326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.913358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 
00:36:01.100 [2024-12-15 13:16:08.913624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.913658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.913776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.913809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbaf0000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.914141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.914190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.100 qpair failed and we were unable to recover it. 00:36:01.100 [2024-12-15 13:16:08.914391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.100 [2024-12-15 13:16:08.914425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.101 qpair failed and we were unable to recover it. 00:36:01.101 [2024-12-15 13:16:08.914615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.101 [2024-12-15 13:16:08.914647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.101 qpair failed and we were unable to recover it. 
00:36:01.101 [2024-12-15 13:16:08.914883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.101 [2024-12-15 13:16:08.914918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.101 qpair failed and we were unable to recover it. 00:36:01.101 [2024-12-15 13:16:08.915099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.101 [2024-12-15 13:16:08.915133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.101 qpair failed and we were unable to recover it. 00:36:01.101 [2024-12-15 13:16:08.915377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.101 [2024-12-15 13:16:08.915410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.101 qpair failed and we were unable to recover it. 00:36:01.101 [2024-12-15 13:16:08.915585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.101 [2024-12-15 13:16:08.915618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.101 qpair failed and we were unable to recover it. 00:36:01.101 [2024-12-15 13:16:08.915740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:01.101 [2024-12-15 13:16:08.915773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420 00:36:01.101 qpair failed and we were unable to recover it. 
00:36:01.101 [2024-12-15 13:16:08.916044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.916078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.916286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.916320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.916514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.916548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.916734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.916767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.916969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.917004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.917269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.917310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.917579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.917613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.917786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.917820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.918095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.918129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.918325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.918360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.918571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.918604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.918857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.918893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.919171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.919205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.919419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.919452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.919630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.919663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.919902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.919937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.920126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.920160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.920421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.920454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.920639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.920672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.920931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.920967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.921138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.921172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.921431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.921464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.921747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.921780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.921927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.921962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:01.101 [2024-12-15 13:16:08.922200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.922234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 [2024-12-15 13:16:08.922427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.922463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:01.101 [2024-12-15 13:16:08.922727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.101 [2024-12-15 13:16:08.922760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.101 qpair failed and we were unable to recover it.
00:36:01.101 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:01.102 [2024-12-15 13:16:08.922958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.922993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:01.102 [2024-12-15 13:16:08.923237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.923271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.923531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.923565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.923856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.923898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.924006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.924039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.924302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.924335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.924576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.924611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.924847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.924881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.925143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.925176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.925422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.925456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.925715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.925748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.926035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.926072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.926259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.926294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.926488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.926523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.926728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.926762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.926987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.927023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.927286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.927320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.927522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.927555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.927793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.927839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.928082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.928115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.928300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.928333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.928502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.928536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.928819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.928863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.929104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.929138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.929421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.929454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.929650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.929683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.929941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.929976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:01.102 [2024-12-15 13:16:08.930226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.930261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.930499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.930532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:01.102 [2024-12-15 13:16:08.930654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.930689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.930881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.930916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:01.102 [2024-12-15 13:16:08.931178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.931212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:01.102 [2024-12-15 13:16:08.931459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.931492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.931751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.931786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.931978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.932013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.932248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.932282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.932570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.102 [2024-12-15 13:16:08.932604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.102 qpair failed and we were unable to recover it.
00:36:01.102 [2024-12-15 13:16:08.932841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.103 [2024-12-15 13:16:08.932874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.103 qpair failed and we were unable to recover it.
00:36:01.103 [2024-12-15 13:16:08.933117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.103 [2024-12-15 13:16:08.933151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.103 qpair failed and we were unable to recover it.
00:36:01.103 [2024-12-15 13:16:08.933389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:01.103 [2024-12-15 13:16:08.933423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbae4000b90 with addr=10.0.0.2, port=4420
00:36:01.103 qpair failed and we were unable to recover it.
00:36:01.103 [2024-12-15 13:16:08.933601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:01.103 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:01.103 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:01.103 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:01.103 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:01.103 [2024-12-15 13:16:08.942110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.103 [2024-12-15 13:16:08.942299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.103 [2024-12-15 13:16:08.942345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.103 [2024-12-15 13:16:08.942370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.103 [2024-12-15 13:16:08.942391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.103 [2024-12-15 13:16:08.942442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.103 qpair failed and we were unable to recover it.
00:36:01.103 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:01.103 13:16:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1208058
00:36:01.103 [2024-12-15 13:16:08.951944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.103 [2024-12-15 13:16:08.952018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.103 [2024-12-15 13:16:08.952048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.103 [2024-12-15 13:16:08.952062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.103 [2024-12-15 13:16:08.952077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.103 [2024-12-15 13:16:08.952109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.103 qpair failed and we were unable to recover it.
00:36:01.363 [2024-12-15 13:16:08.962018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.363 [2024-12-15 13:16:08.962086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.363 [2024-12-15 13:16:08.962105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.363 [2024-12-15 13:16:08.962117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.363 [2024-12-15 13:16:08.962126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.363 [2024-12-15 13:16:08.962148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.363 qpair failed and we were unable to recover it.
00:36:01.363 [2024-12-15 13:16:08.972047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.363 [2024-12-15 13:16:08.972113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.363 [2024-12-15 13:16:08.972127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.363 [2024-12-15 13:16:08.972134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.363 [2024-12-15 13:16:08.972141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.363 [2024-12-15 13:16:08.972159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.363 qpair failed and we were unable to recover it.
00:36:01.363 [2024-12-15 13:16:08.981994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.363 [2024-12-15 13:16:08.982055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.363 [2024-12-15 13:16:08.982068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.363 [2024-12-15 13:16:08.982075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.363 [2024-12-15 13:16:08.982082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.363 [2024-12-15 13:16:08.982098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.363 qpair failed and we were unable to recover it.
00:36:01.363 [2024-12-15 13:16:08.992006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.363 [2024-12-15 13:16:08.992109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.363 [2024-12-15 13:16:08.992123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.363 [2024-12-15 13:16:08.992129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.363 [2024-12-15 13:16:08.992135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.364 [2024-12-15 13:16:08.992151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.364 qpair failed and we were unable to recover it.
00:36:01.364 [2024-12-15 13:16:09.001961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.364 [2024-12-15 13:16:09.002044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.364 [2024-12-15 13:16:09.002057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.364 [2024-12-15 13:16:09.002064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.364 [2024-12-15 13:16:09.002070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.364 [2024-12-15 13:16:09.002086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.364 qpair failed and we were unable to recover it.
00:36:01.364 [2024-12-15 13:16:09.011981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.364 [2024-12-15 13:16:09.012040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.364 [2024-12-15 13:16:09.012053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.364 [2024-12-15 13:16:09.012059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.364 [2024-12-15 13:16:09.012067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.364 [2024-12-15 13:16:09.012082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.364 qpair failed and we were unable to recover it.
00:36:01.364 [2024-12-15 13:16:09.022107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.364 [2024-12-15 13:16:09.022162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.364 [2024-12-15 13:16:09.022176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.364 [2024-12-15 13:16:09.022182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.364 [2024-12-15 13:16:09.022189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.364 [2024-12-15 13:16:09.022203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.364 qpair failed and we were unable to recover it.
00:36:01.364 [2024-12-15 13:16:09.032077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.364 [2024-12-15 13:16:09.032129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.364 [2024-12-15 13:16:09.032142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.364 [2024-12-15 13:16:09.032149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.364 [2024-12-15 13:16:09.032155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.364 [2024-12-15 13:16:09.032171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.364 qpair failed and we were unable to recover it.
00:36:01.364 [2024-12-15 13:16:09.042205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.364 [2024-12-15 13:16:09.042310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.364 [2024-12-15 13:16:09.042324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.364 [2024-12-15 13:16:09.042331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.364 [2024-12-15 13:16:09.042337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.364 [2024-12-15 13:16:09.042351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.364 qpair failed and we were unable to recover it.
00:36:01.364 [2024-12-15 13:16:09.052130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.364 [2024-12-15 13:16:09.052199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.364 [2024-12-15 13:16:09.052212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.364 [2024-12-15 13:16:09.052219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.364 [2024-12-15 13:16:09.052225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.364 [2024-12-15 13:16:09.052240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.364 qpair failed and we were unable to recover it.
00:36:01.364 [2024-12-15 13:16:09.062172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.364 [2024-12-15 13:16:09.062223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.364 [2024-12-15 13:16:09.062240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.364 [2024-12-15 13:16:09.062246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.364 [2024-12-15 13:16:09.062253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.364 [2024-12-15 13:16:09.062269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.364 qpair failed and we were unable to recover it. 
00:36:01.364 [2024-12-15 13:16:09.072221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.364 [2024-12-15 13:16:09.072281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.364 [2024-12-15 13:16:09.072294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.364 [2024-12-15 13:16:09.072301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.364 [2024-12-15 13:16:09.072307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.364 [2024-12-15 13:16:09.072323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.364 qpair failed and we were unable to recover it. 
00:36:01.364 [2024-12-15 13:16:09.082176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.364 [2024-12-15 13:16:09.082230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.364 [2024-12-15 13:16:09.082243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.364 [2024-12-15 13:16:09.082249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.364 [2024-12-15 13:16:09.082256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.364 [2024-12-15 13:16:09.082271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.364 qpair failed and we were unable to recover it. 
00:36:01.364 [2024-12-15 13:16:09.092265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.364 [2024-12-15 13:16:09.092322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.364 [2024-12-15 13:16:09.092335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.364 [2024-12-15 13:16:09.092341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.364 [2024-12-15 13:16:09.092348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.364 [2024-12-15 13:16:09.092363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.364 qpair failed and we were unable to recover it. 
00:36:01.364 [2024-12-15 13:16:09.102224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.364 [2024-12-15 13:16:09.102280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.364 [2024-12-15 13:16:09.102294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.364 [2024-12-15 13:16:09.102301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.364 [2024-12-15 13:16:09.102311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.364 [2024-12-15 13:16:09.102326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.364 qpair failed and we were unable to recover it. 
00:36:01.364 [2024-12-15 13:16:09.112369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.364 [2024-12-15 13:16:09.112420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.364 [2024-12-15 13:16:09.112433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.364 [2024-12-15 13:16:09.112441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.364 [2024-12-15 13:16:09.112447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.364 [2024-12-15 13:16:09.112463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.364 qpair failed and we were unable to recover it. 
00:36:01.364 [2024-12-15 13:16:09.122355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.364 [2024-12-15 13:16:09.122412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.364 [2024-12-15 13:16:09.122425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.364 [2024-12-15 13:16:09.122433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.364 [2024-12-15 13:16:09.122440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.364 [2024-12-15 13:16:09.122454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.364 qpair failed and we were unable to recover it. 
00:36:01.364 [2024-12-15 13:16:09.132376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.132434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.132447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.132453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.132461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.132476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.142421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.142481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.142494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.142501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.142508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.142523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.152358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.152419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.152431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.152439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.152445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.152459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.162401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.162456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.162469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.162475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.162482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.162497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.172516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.172570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.172583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.172590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.172597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.172612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.182557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.182637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.182652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.182658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.182666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.182681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.192549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.192610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.192637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.192646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.192653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.192674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.202560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.202624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.202638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.202645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.202652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.202668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.212601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.212659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.212672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.212679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.212685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.212700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.222643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.222707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.222720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.222728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.222735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.222750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.232648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.232719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.232733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.232741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.232752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.232768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.242622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.242676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.242690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.242697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.242704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.242720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.252695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.252769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.252782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.252789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.365 [2024-12-15 13:16:09.252795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.365 [2024-12-15 13:16:09.252811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.365 qpair failed and we were unable to recover it. 
00:36:01.365 [2024-12-15 13:16:09.262743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.365 [2024-12-15 13:16:09.262799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.365 [2024-12-15 13:16:09.262812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.365 [2024-12-15 13:16:09.262820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.366 [2024-12-15 13:16:09.262831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.366 [2024-12-15 13:16:09.262848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.366 qpair failed and we were unable to recover it. 
00:36:01.626 [2024-12-15 13:16:09.272729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.626 [2024-12-15 13:16:09.272785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.626 [2024-12-15 13:16:09.272798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.626 [2024-12-15 13:16:09.272804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.626 [2024-12-15 13:16:09.272811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.626 [2024-12-15 13:16:09.272831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.626 qpair failed and we were unable to recover it. 
00:36:01.626 [2024-12-15 13:16:09.282719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.626 [2024-12-15 13:16:09.282774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.626 [2024-12-15 13:16:09.282787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.626 [2024-12-15 13:16:09.282794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.626 [2024-12-15 13:16:09.282801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.626 [2024-12-15 13:16:09.282816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.626 qpair failed and we were unable to recover it. 
00:36:01.626 [2024-12-15 13:16:09.292821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.626 [2024-12-15 13:16:09.292880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.626 [2024-12-15 13:16:09.292894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.626 [2024-12-15 13:16:09.292901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.626 [2024-12-15 13:16:09.292907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.626 [2024-12-15 13:16:09.292923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.626 qpair failed and we were unable to recover it. 
00:36:01.626 [2024-12-15 13:16:09.302858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.626 [2024-12-15 13:16:09.302914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.626 [2024-12-15 13:16:09.302927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.626 [2024-12-15 13:16:09.302934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.626 [2024-12-15 13:16:09.302941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.626 [2024-12-15 13:16:09.302956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.626 qpair failed and we were unable to recover it. 
00:36:01.626 [2024-12-15 13:16:09.312889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.626 [2024-12-15 13:16:09.312968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.626 [2024-12-15 13:16:09.312982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.312989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.312995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.313011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.322939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.322999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.323016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.323024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.323031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.323046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.332941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.332995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.333008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.333015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.333021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.333036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.342989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.343044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.343057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.343064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.343070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.343086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.352996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.353051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.353063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.353070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.353077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.353091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.363094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.363147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.363161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.363172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.363179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.363194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.373041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.373097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.373110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.373117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.373123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.373138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.383076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.383129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.383142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.383149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.383156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.383171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.393148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.393213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.393226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.393233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.393239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.393254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.403160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.403212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.403224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.403231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.403237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.403252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.413162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.413220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.413233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.413240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.413247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.413262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.423233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.423288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.423302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.423309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.423315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.423330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.433252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.433308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.433322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.433329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.433335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.433350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.627 [2024-12-15 13:16:09.443242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.627 [2024-12-15 13:16:09.443293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.627 [2024-12-15 13:16:09.443306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.627 [2024-12-15 13:16:09.443313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.627 [2024-12-15 13:16:09.443320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.627 [2024-12-15 13:16:09.443334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.627 qpair failed and we were unable to recover it. 
00:36:01.628 [2024-12-15 13:16:09.453278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.628 [2024-12-15 13:16:09.453338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.628 [2024-12-15 13:16:09.453351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.628 [2024-12-15 13:16:09.453358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.628 [2024-12-15 13:16:09.453365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.628 [2024-12-15 13:16:09.453380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.628 qpair failed and we were unable to recover it. 
00:36:01.628 [2024-12-15 13:16:09.463305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.628 [2024-12-15 13:16:09.463361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.628 [2024-12-15 13:16:09.463374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.628 [2024-12-15 13:16:09.463380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.628 [2024-12-15 13:16:09.463387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.628 [2024-12-15 13:16:09.463401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.628 qpair failed and we were unable to recover it. 
00:36:01.628 [2024-12-15 13:16:09.473322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.628 [2024-12-15 13:16:09.473376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.628 [2024-12-15 13:16:09.473389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.628 [2024-12-15 13:16:09.473395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.628 [2024-12-15 13:16:09.473402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.628 [2024-12-15 13:16:09.473417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.628 qpair failed and we were unable to recover it. 
00:36:01.628 [2024-12-15 13:16:09.483367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.628 [2024-12-15 13:16:09.483421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.628 [2024-12-15 13:16:09.483434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.628 [2024-12-15 13:16:09.483441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.628 [2024-12-15 13:16:09.483448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.628 [2024-12-15 13:16:09.483462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.628 qpair failed and we were unable to recover it. 
00:36:01.628 [2024-12-15 13:16:09.493401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.628 [2024-12-15 13:16:09.493458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.628 [2024-12-15 13:16:09.493471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.628 [2024-12-15 13:16:09.493480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.628 [2024-12-15 13:16:09.493486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.628 [2024-12-15 13:16:09.493502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.628 qpair failed and we were unable to recover it. 
00:36:01.628 [2024-12-15 13:16:09.503421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.628 [2024-12-15 13:16:09.503476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.628 [2024-12-15 13:16:09.503489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.628 [2024-12-15 13:16:09.503496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.628 [2024-12-15 13:16:09.503504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.628 [2024-12-15 13:16:09.503518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.628 qpair failed and we were unable to recover it. 
00:36:01.628 [2024-12-15 13:16:09.513461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.628 [2024-12-15 13:16:09.513537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.628 [2024-12-15 13:16:09.513550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.628 [2024-12-15 13:16:09.513557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.628 [2024-12-15 13:16:09.513564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.628 [2024-12-15 13:16:09.513579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.628 qpair failed and we were unable to recover it. 
00:36:01.628 [2024-12-15 13:16:09.523490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.628 [2024-12-15 13:16:09.523546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.628 [2024-12-15 13:16:09.523559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.628 [2024-12-15 13:16:09.523567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.628 [2024-12-15 13:16:09.523574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.628 [2024-12-15 13:16:09.523589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.628 qpair failed and we were unable to recover it. 
00:36:01.889 [2024-12-15 13:16:09.533525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.889 [2024-12-15 13:16:09.533580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.889 [2024-12-15 13:16:09.533594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.889 [2024-12-15 13:16:09.533601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.889 [2024-12-15 13:16:09.533607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.889 [2024-12-15 13:16:09.533626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.889 qpair failed and we were unable to recover it. 
00:36:01.889 [2024-12-15 13:16:09.543578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.889 [2024-12-15 13:16:09.543635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.889 [2024-12-15 13:16:09.543648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.889 [2024-12-15 13:16:09.543654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.889 [2024-12-15 13:16:09.543661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.889 [2024-12-15 13:16:09.543676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.889 qpair failed and we were unable to recover it. 
00:36:01.889 [2024-12-15 13:16:09.553581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.889 [2024-12-15 13:16:09.553631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.889 [2024-12-15 13:16:09.553644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.889 [2024-12-15 13:16:09.553651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.889 [2024-12-15 13:16:09.553658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.889 [2024-12-15 13:16:09.553672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.889 qpair failed and we were unable to recover it. 
00:36:01.889 [2024-12-15 13:16:09.563607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.889 [2024-12-15 13:16:09.563661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.889 [2024-12-15 13:16:09.563674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.889 [2024-12-15 13:16:09.563681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.889 [2024-12-15 13:16:09.563687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.889 [2024-12-15 13:16:09.563703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.889 qpair failed and we were unable to recover it. 
00:36:01.889 [2024-12-15 13:16:09.573637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.889 [2024-12-15 13:16:09.573695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.889 [2024-12-15 13:16:09.573708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.889 [2024-12-15 13:16:09.573714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.889 [2024-12-15 13:16:09.573721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.889 [2024-12-15 13:16:09.573735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.889 qpair failed and we were unable to recover it. 
00:36:01.889 [2024-12-15 13:16:09.583688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.889 [2024-12-15 13:16:09.583749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.889 [2024-12-15 13:16:09.583763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.889 [2024-12-15 13:16:09.583771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.889 [2024-12-15 13:16:09.583777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.889 [2024-12-15 13:16:09.583791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.889 qpair failed and we were unable to recover it. 
00:36:01.889 [2024-12-15 13:16:09.593682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.889 [2024-12-15 13:16:09.593736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.889 [2024-12-15 13:16:09.593749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.889 [2024-12-15 13:16:09.593756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.889 [2024-12-15 13:16:09.593763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.889 [2024-12-15 13:16:09.593778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.889 qpair failed and we were unable to recover it. 
00:36:01.889 [2024-12-15 13:16:09.603708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.889 [2024-12-15 13:16:09.603763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.889 [2024-12-15 13:16:09.603776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.889 [2024-12-15 13:16:09.603783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.890 [2024-12-15 13:16:09.603790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.890 [2024-12-15 13:16:09.603804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.890 qpair failed and we were unable to recover it. 
00:36:01.890 [2024-12-15 13:16:09.613749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.890 [2024-12-15 13:16:09.613806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.890 [2024-12-15 13:16:09.613819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.890 [2024-12-15 13:16:09.613835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.890 [2024-12-15 13:16:09.613842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.890 [2024-12-15 13:16:09.613856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.890 qpair failed and we were unable to recover it. 
00:36:01.890 [2024-12-15 13:16:09.623791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.890 [2024-12-15 13:16:09.623844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.890 [2024-12-15 13:16:09.623860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.890 [2024-12-15 13:16:09.623867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.890 [2024-12-15 13:16:09.623874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.890 [2024-12-15 13:16:09.623889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.890 qpair failed and we were unable to recover it. 
00:36:01.890 [2024-12-15 13:16:09.633797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.890 [2024-12-15 13:16:09.633860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.890 [2024-12-15 13:16:09.633873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.890 [2024-12-15 13:16:09.633880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.890 [2024-12-15 13:16:09.633887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:01.890 [2024-12-15 13:16:09.633902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:01.890 qpair failed and we were unable to recover it. 
00:36:01.890 [2024-12-15 13:16:09.643871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.890 [2024-12-15 13:16:09.643924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.890 [2024-12-15 13:16:09.643937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.890 [2024-12-15 13:16:09.643943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.890 [2024-12-15 13:16:09.643950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.890 [2024-12-15 13:16:09.643964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.890 qpair failed and we were unable to recover it.
00:36:01.890 [2024-12-15 13:16:09.653870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.890 [2024-12-15 13:16:09.653971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.890 [2024-12-15 13:16:09.653987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.890 [2024-12-15 13:16:09.653993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.890 [2024-12-15 13:16:09.654000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.890 [2024-12-15 13:16:09.654015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.890 qpair failed and we were unable to recover it.
00:36:01.890 [2024-12-15 13:16:09.663918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.890 [2024-12-15 13:16:09.663972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.890 [2024-12-15 13:16:09.663985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.890 [2024-12-15 13:16:09.663992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.890 [2024-12-15 13:16:09.664001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.890 [2024-12-15 13:16:09.664016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.890 qpair failed and we were unable to recover it.
00:36:01.890 [2024-12-15 13:16:09.673932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.890 [2024-12-15 13:16:09.673985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.890 [2024-12-15 13:16:09.673999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.890 [2024-12-15 13:16:09.674005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.890 [2024-12-15 13:16:09.674012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.890 [2024-12-15 13:16:09.674027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.890 qpair failed and we were unable to recover it.
00:36:01.890 [2024-12-15 13:16:09.683947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.890 [2024-12-15 13:16:09.683999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.890 [2024-12-15 13:16:09.684012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.890 [2024-12-15 13:16:09.684019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.890 [2024-12-15 13:16:09.684025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.890 [2024-12-15 13:16:09.684040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.890 qpair failed and we were unable to recover it.
00:36:01.890 [2024-12-15 13:16:09.694027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.890 [2024-12-15 13:16:09.694084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.890 [2024-12-15 13:16:09.694098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.890 [2024-12-15 13:16:09.694104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.890 [2024-12-15 13:16:09.694111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.890 [2024-12-15 13:16:09.694126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.890 qpair failed and we were unable to recover it.
00:36:01.890 [2024-12-15 13:16:09.704021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.890 [2024-12-15 13:16:09.704076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.890 [2024-12-15 13:16:09.704088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.890 [2024-12-15 13:16:09.704095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.890 [2024-12-15 13:16:09.704101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.890 [2024-12-15 13:16:09.704116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.890 qpair failed and we were unable to recover it.
00:36:01.890 [2024-12-15 13:16:09.714048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.890 [2024-12-15 13:16:09.714106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.890 [2024-12-15 13:16:09.714119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.890 [2024-12-15 13:16:09.714127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.890 [2024-12-15 13:16:09.714133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.890 [2024-12-15 13:16:09.714148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.890 qpair failed and we were unable to recover it.
00:36:01.890 [2024-12-15 13:16:09.724072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.890 [2024-12-15 13:16:09.724123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.890 [2024-12-15 13:16:09.724136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.890 [2024-12-15 13:16:09.724143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.890 [2024-12-15 13:16:09.724149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.891 [2024-12-15 13:16:09.724164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.891 qpair failed and we were unable to recover it.
00:36:01.891 [2024-12-15 13:16:09.734095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.891 [2024-12-15 13:16:09.734153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.891 [2024-12-15 13:16:09.734166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.891 [2024-12-15 13:16:09.734173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.891 [2024-12-15 13:16:09.734180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.891 [2024-12-15 13:16:09.734195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.891 qpair failed and we were unable to recover it.
00:36:01.891 [2024-12-15 13:16:09.744239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.891 [2024-12-15 13:16:09.744315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.891 [2024-12-15 13:16:09.744329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.891 [2024-12-15 13:16:09.744336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.891 [2024-12-15 13:16:09.744342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.891 [2024-12-15 13:16:09.744357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.891 qpair failed and we were unable to recover it.
00:36:01.891 [2024-12-15 13:16:09.754195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.891 [2024-12-15 13:16:09.754254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.891 [2024-12-15 13:16:09.754270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.891 [2024-12-15 13:16:09.754277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.891 [2024-12-15 13:16:09.754283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.891 [2024-12-15 13:16:09.754298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.891 qpair failed and we were unable to recover it.
00:36:01.891 [2024-12-15 13:16:09.764236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.891 [2024-12-15 13:16:09.764289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.891 [2024-12-15 13:16:09.764302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.891 [2024-12-15 13:16:09.764308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.891 [2024-12-15 13:16:09.764314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.891 [2024-12-15 13:16:09.764329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.891 qpair failed and we were unable to recover it.
00:36:01.891 [2024-12-15 13:16:09.774255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.891 [2024-12-15 13:16:09.774315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.891 [2024-12-15 13:16:09.774327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.891 [2024-12-15 13:16:09.774334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.891 [2024-12-15 13:16:09.774340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.891 [2024-12-15 13:16:09.774355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.891 qpair failed and we were unable to recover it.
00:36:01.891 [2024-12-15 13:16:09.784233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.891 [2024-12-15 13:16:09.784290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.891 [2024-12-15 13:16:09.784304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.891 [2024-12-15 13:16:09.784310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.891 [2024-12-15 13:16:09.784316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.891 [2024-12-15 13:16:09.784331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.891 qpair failed and we were unable to recover it.
00:36:01.891 [2024-12-15 13:16:09.794289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:01.891 [2024-12-15 13:16:09.794371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:01.891 [2024-12-15 13:16:09.794385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:01.891 [2024-12-15 13:16:09.794393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:01.891 [2024-12-15 13:16:09.794402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:01.891 [2024-12-15 13:16:09.794417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:01.891 qpair failed and we were unable to recover it.
00:36:02.151 [2024-12-15 13:16:09.804287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.151 [2024-12-15 13:16:09.804351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.151 [2024-12-15 13:16:09.804363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.151 [2024-12-15 13:16:09.804371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.151 [2024-12-15 13:16:09.804377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.151 [2024-12-15 13:16:09.804391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.151 qpair failed and we were unable to recover it.
00:36:02.151 [2024-12-15 13:16:09.814278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.151 [2024-12-15 13:16:09.814335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.151 [2024-12-15 13:16:09.814348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.151 [2024-12-15 13:16:09.814354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.151 [2024-12-15 13:16:09.814361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.151 [2024-12-15 13:16:09.814376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.151 qpair failed and we were unable to recover it.
00:36:02.151 [2024-12-15 13:16:09.824339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.151 [2024-12-15 13:16:09.824396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.151 [2024-12-15 13:16:09.824408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.151 [2024-12-15 13:16:09.824415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.151 [2024-12-15 13:16:09.824421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.151 [2024-12-15 13:16:09.824436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.151 qpair failed and we were unable to recover it.
00:36:02.151 [2024-12-15 13:16:09.834387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.151 [2024-12-15 13:16:09.834438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.151 [2024-12-15 13:16:09.834452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.151 [2024-12-15 13:16:09.834459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.151 [2024-12-15 13:16:09.834465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.151 [2024-12-15 13:16:09.834479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.151 qpair failed and we were unable to recover it.
00:36:02.151 [2024-12-15 13:16:09.844427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.844481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.844494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.844501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.844507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.844522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.854389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.854444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.854458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.854465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.854472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.854487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.864468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.864522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.864535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.864541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.864547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.864562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.874521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.874574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.874587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.874594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.874600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.874615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.884430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.884490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.884506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.884514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.884520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.884535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.894557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.894613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.894626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.894633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.894639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.894654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.904509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.904566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.904579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.904586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.904592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.904606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.914600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.914661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.914675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.914682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.914688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.914703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.924624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.924688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.924701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.924714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.924720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.924735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.934673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.934731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.934744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.934751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.934758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.934772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.944698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.944761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.944774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.944782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.944788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.944802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.954810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.954892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.954907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.152 [2024-12-15 13:16:09.954913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.152 [2024-12-15 13:16:09.954920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.152 [2024-12-15 13:16:09.954936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.152 qpair failed and we were unable to recover it.
00:36:02.152 [2024-12-15 13:16:09.964743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.152 [2024-12-15 13:16:09.964806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.152 [2024-12-15 13:16:09.964819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.153 [2024-12-15 13:16:09.964830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.153 [2024-12-15 13:16:09.964836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.153 [2024-12-15 13:16:09.964852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.153 qpair failed and we were unable to recover it.
00:36:02.153 [2024-12-15 13:16:09.974777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.153 [2024-12-15 13:16:09.974837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.153 [2024-12-15 13:16:09.974851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.153 [2024-12-15 13:16:09.974858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.153 [2024-12-15 13:16:09.974864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.153 [2024-12-15 13:16:09.974878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.153 qpair failed and we were unable to recover it.
00:36:02.153 [2024-12-15 13:16:09.984799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.153 [2024-12-15 13:16:09.984866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.153 [2024-12-15 13:16:09.984879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.153 [2024-12-15 13:16:09.984887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.153 [2024-12-15 13:16:09.984893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.153 [2024-12-15 13:16:09.984907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.153 qpair failed and we were unable to recover it.
00:36:02.153 [2024-12-15 13:16:09.994821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.153 [2024-12-15 13:16:09.994909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.153 [2024-12-15 13:16:09.994922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.153 [2024-12-15 13:16:09.994929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.153 [2024-12-15 13:16:09.994936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.153 [2024-12-15 13:16:09.994951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.153 qpair failed and we were unable to recover it. 
00:36:02.153 [2024-12-15 13:16:10.004854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.153 [2024-12-15 13:16:10.004917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.153 [2024-12-15 13:16:10.004931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.153 [2024-12-15 13:16:10.004938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.153 [2024-12-15 13:16:10.004945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.153 [2024-12-15 13:16:10.004959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.153 qpair failed and we were unable to recover it. 
00:36:02.153 [2024-12-15 13:16:10.014841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.153 [2024-12-15 13:16:10.014907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.153 [2024-12-15 13:16:10.014924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.153 [2024-12-15 13:16:10.014932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.153 [2024-12-15 13:16:10.014939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.153 [2024-12-15 13:16:10.014956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.153 qpair failed and we were unable to recover it. 
00:36:02.153 [2024-12-15 13:16:10.024934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.153 [2024-12-15 13:16:10.024994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.153 [2024-12-15 13:16:10.025007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.153 [2024-12-15 13:16:10.025015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.153 [2024-12-15 13:16:10.025022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.153 [2024-12-15 13:16:10.025037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.153 qpair failed and we were unable to recover it. 
00:36:02.153 [2024-12-15 13:16:10.034989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.153 [2024-12-15 13:16:10.035061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.153 [2024-12-15 13:16:10.035079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.153 [2024-12-15 13:16:10.035087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.153 [2024-12-15 13:16:10.035094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.153 [2024-12-15 13:16:10.035111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.153 qpair failed and we were unable to recover it. 
00:36:02.153 [2024-12-15 13:16:10.045015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.153 [2024-12-15 13:16:10.045065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.153 [2024-12-15 13:16:10.045079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.153 [2024-12-15 13:16:10.045085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.153 [2024-12-15 13:16:10.045092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.153 [2024-12-15 13:16:10.045107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.153 qpair failed and we were unable to recover it. 
00:36:02.153 [2024-12-15 13:16:10.055051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.153 [2024-12-15 13:16:10.055110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.153 [2024-12-15 13:16:10.055123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.153 [2024-12-15 13:16:10.055133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.153 [2024-12-15 13:16:10.055139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.153 [2024-12-15 13:16:10.055154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.153 qpair failed and we were unable to recover it. 
00:36:02.413 [2024-12-15 13:16:10.065063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.413 [2024-12-15 13:16:10.065124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.413 [2024-12-15 13:16:10.065138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.413 [2024-12-15 13:16:10.065145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.413 [2024-12-15 13:16:10.065151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.413 [2024-12-15 13:16:10.065166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.413 qpair failed and we were unable to recover it. 
00:36:02.413 [2024-12-15 13:16:10.075068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.413 [2024-12-15 13:16:10.075124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.413 [2024-12-15 13:16:10.075137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.413 [2024-12-15 13:16:10.075144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.413 [2024-12-15 13:16:10.075150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.413 [2024-12-15 13:16:10.075165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.085120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.085173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.085186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.085193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.085199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.085213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.095131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.095190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.095203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.095211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.095217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.095235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.105148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.105203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.105216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.105223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.105229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.105245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.115188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.115244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.115257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.115264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.115270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.115285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.125212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.125266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.125280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.125286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.125293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.125307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.135251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.135308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.135321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.135328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.135334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.135349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.145267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.145321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.145334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.145341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.145348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.145363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.155289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.155346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.155359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.155367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.155374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.155388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.165317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.165371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.165385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.165391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.165398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.165412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.175345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.175397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.175410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.175416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.175423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.175437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.185394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.185448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.185464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.185471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.185477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.185492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.195419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.195496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.195511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.195518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.195524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.195539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.205438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.414 [2024-12-15 13:16:10.205504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.414 [2024-12-15 13:16:10.205517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.414 [2024-12-15 13:16:10.205524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.414 [2024-12-15 13:16:10.205531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.414 [2024-12-15 13:16:10.205544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.414 qpair failed and we were unable to recover it. 
00:36:02.414 [2024-12-15 13:16:10.215485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.415 [2024-12-15 13:16:10.215542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.415 [2024-12-15 13:16:10.215555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.415 [2024-12-15 13:16:10.215561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.415 [2024-12-15 13:16:10.215568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.415 [2024-12-15 13:16:10.215582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.415 qpair failed and we were unable to recover it. 
00:36:02.415 [2024-12-15 13:16:10.225518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.415 [2024-12-15 13:16:10.225605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.415 [2024-12-15 13:16:10.225618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.415 [2024-12-15 13:16:10.225624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.415 [2024-12-15 13:16:10.225635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.415 [2024-12-15 13:16:10.225649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.415 qpair failed and we were unable to recover it. 
00:36:02.415 [2024-12-15 13:16:10.235532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.415 [2024-12-15 13:16:10.235595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.415 [2024-12-15 13:16:10.235608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.415 [2024-12-15 13:16:10.235615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.415 [2024-12-15 13:16:10.235621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.415 [2024-12-15 13:16:10.235635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.415 qpair failed and we were unable to recover it. 
00:36:02.415 [2024-12-15 13:16:10.245565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.415 [2024-12-15 13:16:10.245619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.415 [2024-12-15 13:16:10.245632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.415 [2024-12-15 13:16:10.245639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.415 [2024-12-15 13:16:10.245646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.415 [2024-12-15 13:16:10.245660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.415 qpair failed and we were unable to recover it. 
00:36:02.415 [2024-12-15 13:16:10.255524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.415 [2024-12-15 13:16:10.255590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.415 [2024-12-15 13:16:10.255603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.415 [2024-12-15 13:16:10.255610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.415 [2024-12-15 13:16:10.255616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.415 [2024-12-15 13:16:10.255631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.415 qpair failed and we were unable to recover it. 
00:36:02.415 [2024-12-15 13:16:10.265630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.415 [2024-12-15 13:16:10.265683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.415 [2024-12-15 13:16:10.265695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.415 [2024-12-15 13:16:10.265702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.415 [2024-12-15 13:16:10.265709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.415 [2024-12-15 13:16:10.265724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.415 qpair failed and we were unable to recover it.
00:36:02.415 [2024-12-15 13:16:10.275652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.415 [2024-12-15 13:16:10.275732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.415 [2024-12-15 13:16:10.275745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.415 [2024-12-15 13:16:10.275752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.415 [2024-12-15 13:16:10.275759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.415 [2024-12-15 13:16:10.275773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.415 qpair failed and we were unable to recover it.
00:36:02.415 [2024-12-15 13:16:10.285705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.415 [2024-12-15 13:16:10.285773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.415 [2024-12-15 13:16:10.285786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.415 [2024-12-15 13:16:10.285792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.415 [2024-12-15 13:16:10.285799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.415 [2024-12-15 13:16:10.285813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.415 qpair failed and we were unable to recover it.
00:36:02.415 [2024-12-15 13:16:10.295714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.415 [2024-12-15 13:16:10.295768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.415 [2024-12-15 13:16:10.295781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.415 [2024-12-15 13:16:10.295787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.415 [2024-12-15 13:16:10.295793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.415 [2024-12-15 13:16:10.295808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.415 qpair failed and we were unable to recover it.
00:36:02.415 [2024-12-15 13:16:10.305666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.415 [2024-12-15 13:16:10.305724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.415 [2024-12-15 13:16:10.305737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.415 [2024-12-15 13:16:10.305744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.415 [2024-12-15 13:16:10.305750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.415 [2024-12-15 13:16:10.305765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.415 qpair failed and we were unable to recover it.
00:36:02.415 [2024-12-15 13:16:10.315693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.415 [2024-12-15 13:16:10.315748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.415 [2024-12-15 13:16:10.315764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.415 [2024-12-15 13:16:10.315771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.415 [2024-12-15 13:16:10.315778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.415 [2024-12-15 13:16:10.315792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.415 qpair failed and we were unable to recover it.
00:36:02.676 [2024-12-15 13:16:10.325789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.676 [2024-12-15 13:16:10.325851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.676 [2024-12-15 13:16:10.325864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.676 [2024-12-15 13:16:10.325873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.676 [2024-12-15 13:16:10.325879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.676 [2024-12-15 13:16:10.325894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.676 qpair failed and we were unable to recover it.
00:36:02.676 [2024-12-15 13:16:10.335746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.676 [2024-12-15 13:16:10.335817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.676 [2024-12-15 13:16:10.335833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.676 [2024-12-15 13:16:10.335840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.676 [2024-12-15 13:16:10.335846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.676 [2024-12-15 13:16:10.335861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.676 qpair failed and we were unable to recover it.
00:36:02.676 [2024-12-15 13:16:10.345852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.676 [2024-12-15 13:16:10.345908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.676 [2024-12-15 13:16:10.345921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.676 [2024-12-15 13:16:10.345927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.676 [2024-12-15 13:16:10.345933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.676 [2024-12-15 13:16:10.345948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.676 qpair failed and we were unable to recover it.
00:36:02.676 [2024-12-15 13:16:10.355872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.676 [2024-12-15 13:16:10.355926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.676 [2024-12-15 13:16:10.355939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.676 [2024-12-15 13:16:10.355946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.676 [2024-12-15 13:16:10.355955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.676 [2024-12-15 13:16:10.355970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.676 qpair failed and we were unable to recover it.
00:36:02.676 [2024-12-15 13:16:10.365865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.676 [2024-12-15 13:16:10.365957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.676 [2024-12-15 13:16:10.365970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.676 [2024-12-15 13:16:10.365977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.676 [2024-12-15 13:16:10.365983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.676 [2024-12-15 13:16:10.365998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.676 qpair failed and we were unable to recover it.
00:36:02.676 [2024-12-15 13:16:10.375940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.375995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.376007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.376014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.376020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.376035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.386030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.386116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.386130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.386137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.386143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.386158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.395910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.395969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.395982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.395989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.395996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.396011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.406036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.406122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.406135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.406142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.406148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.406162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.416063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.416120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.416134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.416140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.416147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.416162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.426064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.426125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.426138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.426145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.426152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.426166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.436037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.436087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.436100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.436106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.436113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.436129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.446071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.446122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.446138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.446145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.446151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.446165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.456170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.456226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.456238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.456245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.456251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.456265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.466140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.466194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.466207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.466213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.466219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.466235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.476209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.476262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.476275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.476282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.476288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.476302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.486220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.486270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.486283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.486293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.486299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.486314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.496285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.496351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.496365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.496372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.677 [2024-12-15 13:16:10.496378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.677 [2024-12-15 13:16:10.496392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.677 qpair failed and we were unable to recover it.
00:36:02.677 [2024-12-15 13:16:10.506306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.677 [2024-12-15 13:16:10.506357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.677 [2024-12-15 13:16:10.506370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.677 [2024-12-15 13:16:10.506377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.678 [2024-12-15 13:16:10.506383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.678 [2024-12-15 13:16:10.506398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.678 qpair failed and we were unable to recover it.
00:36:02.678 [2024-12-15 13:16:10.516264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.678 [2024-12-15 13:16:10.516319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.678 [2024-12-15 13:16:10.516332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.678 [2024-12-15 13:16:10.516338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.678 [2024-12-15 13:16:10.516345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.678 [2024-12-15 13:16:10.516359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.678 qpair failed and we were unable to recover it.
00:36:02.678 [2024-12-15 13:16:10.526297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.678 [2024-12-15 13:16:10.526354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.678 [2024-12-15 13:16:10.526367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.678 [2024-12-15 13:16:10.526374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.678 [2024-12-15 13:16:10.526380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.678 [2024-12-15 13:16:10.526398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.678 qpair failed and we were unable to recover it.
00:36:02.678 [2024-12-15 13:16:10.536414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.678 [2024-12-15 13:16:10.536488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.678 [2024-12-15 13:16:10.536501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.678 [2024-12-15 13:16:10.536508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.678 [2024-12-15 13:16:10.536514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.678 [2024-12-15 13:16:10.536529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.678 qpair failed and we were unable to recover it.
00:36:02.678 [2024-12-15 13:16:10.546341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.678 [2024-12-15 13:16:10.546400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.678 [2024-12-15 13:16:10.546413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.678 [2024-12-15 13:16:10.546419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.678 [2024-12-15 13:16:10.546425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.678 [2024-12-15 13:16:10.546441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.678 qpair failed and we were unable to recover it.
00:36:02.678 [2024-12-15 13:16:10.556374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.678 [2024-12-15 13:16:10.556440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.678 [2024-12-15 13:16:10.556452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.678 [2024-12-15 13:16:10.556460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.678 [2024-12-15 13:16:10.556466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.678 [2024-12-15 13:16:10.556480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.678 qpair failed and we were unable to recover it.
00:36:02.678 [2024-12-15 13:16:10.566482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.678 [2024-12-15 13:16:10.566558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.678 [2024-12-15 13:16:10.566570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.678 [2024-12-15 13:16:10.566577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.678 [2024-12-15 13:16:10.566583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.678 [2024-12-15 13:16:10.566598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.678 qpair failed and we were unable to recover it.
00:36:02.678 [2024-12-15 13:16:10.576448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.678 [2024-12-15 13:16:10.576507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.678 [2024-12-15 13:16:10.576521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.678 [2024-12-15 13:16:10.576528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.678 [2024-12-15 13:16:10.576534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.678 [2024-12-15 13:16:10.576549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.678 qpair failed and we were unable to recover it.
00:36:02.939 [2024-12-15 13:16:10.586561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.939 [2024-12-15 13:16:10.586633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.939 [2024-12-15 13:16:10.586647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.939 [2024-12-15 13:16:10.586654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.939 [2024-12-15 13:16:10.586660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.939 [2024-12-15 13:16:10.586675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.939 qpair failed and we were unable to recover it.
00:36:02.939 [2024-12-15 13:16:10.596492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.939 [2024-12-15 13:16:10.596548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.939 [2024-12-15 13:16:10.596562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.939 [2024-12-15 13:16:10.596569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.939 [2024-12-15 13:16:10.596576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.939 [2024-12-15 13:16:10.596591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.939 qpair failed and we were unable to recover it.
00:36:02.939 [2024-12-15 13:16:10.606538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:02.939 [2024-12-15 13:16:10.606611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:02.939 [2024-12-15 13:16:10.606624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:02.939 [2024-12-15 13:16:10.606631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:02.939 [2024-12-15 13:16:10.606637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:02.939 [2024-12-15 13:16:10.606652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:02.939 qpair failed and we were unable to recover it.
00:36:02.939 [2024-12-15 13:16:10.616566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.939 [2024-12-15 13:16:10.616621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.939 [2024-12-15 13:16:10.616635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.939 [2024-12-15 13:16:10.616645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.939 [2024-12-15 13:16:10.616651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.939 [2024-12-15 13:16:10.616666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.939 qpair failed and we were unable to recover it. 
00:36:02.939 [2024-12-15 13:16:10.626678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.939 [2024-12-15 13:16:10.626734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.939 [2024-12-15 13:16:10.626747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.939 [2024-12-15 13:16:10.626754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.939 [2024-12-15 13:16:10.626760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.939 [2024-12-15 13:16:10.626774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.939 qpair failed and we were unable to recover it. 
00:36:02.939 [2024-12-15 13:16:10.636691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.939 [2024-12-15 13:16:10.636755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.939 [2024-12-15 13:16:10.636767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.939 [2024-12-15 13:16:10.636775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.939 [2024-12-15 13:16:10.636781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.939 [2024-12-15 13:16:10.636795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.939 qpair failed and we were unable to recover it. 
00:36:02.939 [2024-12-15 13:16:10.646706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.939 [2024-12-15 13:16:10.646760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.939 [2024-12-15 13:16:10.646773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.939 [2024-12-15 13:16:10.646780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.939 [2024-12-15 13:16:10.646786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.939 [2024-12-15 13:16:10.646801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.939 qpair failed and we were unable to recover it. 
00:36:02.939 [2024-12-15 13:16:10.656687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.939 [2024-12-15 13:16:10.656755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.939 [2024-12-15 13:16:10.656768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.939 [2024-12-15 13:16:10.656775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.939 [2024-12-15 13:16:10.656781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.939 [2024-12-15 13:16:10.656801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.939 qpair failed and we were unable to recover it. 
00:36:02.939 [2024-12-15 13:16:10.666811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.939 [2024-12-15 13:16:10.666896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.939 [2024-12-15 13:16:10.666909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.939 [2024-12-15 13:16:10.666916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.939 [2024-12-15 13:16:10.666922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.939 [2024-12-15 13:16:10.666937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.939 qpair failed and we were unable to recover it. 
00:36:02.939 [2024-12-15 13:16:10.676783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.939 [2024-12-15 13:16:10.676843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.939 [2024-12-15 13:16:10.676856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.939 [2024-12-15 13:16:10.676863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.939 [2024-12-15 13:16:10.676869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.939 [2024-12-15 13:16:10.676885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.939 qpair failed and we were unable to recover it. 
00:36:02.939 [2024-12-15 13:16:10.686778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.939 [2024-12-15 13:16:10.686876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.939 [2024-12-15 13:16:10.686889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.939 [2024-12-15 13:16:10.686897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.939 [2024-12-15 13:16:10.686903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.939 [2024-12-15 13:16:10.686917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.939 qpair failed and we were unable to recover it. 
00:36:02.939 [2024-12-15 13:16:10.696803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.939 [2024-12-15 13:16:10.696867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.939 [2024-12-15 13:16:10.696880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.696887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.696893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.696908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.706818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.706880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.706893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.706900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.706906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.706920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.716859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.716915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.716928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.716934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.716941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.716956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.726942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.727017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.727032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.727039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.727045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.727060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.736996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.737052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.737065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.737072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.737078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.737094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.746938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.747000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.747016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.747024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.747029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.747044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.757033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.757085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.757098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.757104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.757110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.757125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.767042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.767093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.767106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.767113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.767119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.767133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.777045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.777102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.777115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.777121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.777128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.777143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.787099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.787162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.787174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.787181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.787191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.787206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.797094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.797146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.797159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.797166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.797172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.797187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.807177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.807228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.807242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.807248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.807254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.807269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.817207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.817263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.817276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.817282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.817288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.940 [2024-12-15 13:16:10.817304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.940 qpair failed and we were unable to recover it. 
00:36:02.940 [2024-12-15 13:16:10.827236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.940 [2024-12-15 13:16:10.827285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.940 [2024-12-15 13:16:10.827298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.940 [2024-12-15 13:16:10.827305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.940 [2024-12-15 13:16:10.827311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.941 [2024-12-15 13:16:10.827325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.941 qpair failed and we were unable to recover it. 
00:36:02.941 [2024-12-15 13:16:10.837296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.941 [2024-12-15 13:16:10.837350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.941 [2024-12-15 13:16:10.837363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.941 [2024-12-15 13:16:10.837369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.941 [2024-12-15 13:16:10.837376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:02.941 [2024-12-15 13:16:10.837390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:02.941 qpair failed and we were unable to recover it. 
00:36:03.201 [2024-12-15 13:16:10.847286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.201 [2024-12-15 13:16:10.847340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.201 [2024-12-15 13:16:10.847353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.201 [2024-12-15 13:16:10.847359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.201 [2024-12-15 13:16:10.847366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.201 [2024-12-15 13:16:10.847381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.201 qpair failed and we were unable to recover it. 
00:36:03.201 [2024-12-15 13:16:10.857329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.201 [2024-12-15 13:16:10.857385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.201 [2024-12-15 13:16:10.857398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.201 [2024-12-15 13:16:10.857405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.201 [2024-12-15 13:16:10.857412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.201 [2024-12-15 13:16:10.857427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.201 qpair failed and we were unable to recover it. 
00:36:03.201 [2024-12-15 13:16:10.867380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.201 [2024-12-15 13:16:10.867440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.201 [2024-12-15 13:16:10.867454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.201 [2024-12-15 13:16:10.867462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.201 [2024-12-15 13:16:10.867468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.201 [2024-12-15 13:16:10.867483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.201 qpair failed and we were unable to recover it. 
00:36:03.201 [2024-12-15 13:16:10.877362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.201 [2024-12-15 13:16:10.877420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.201 [2024-12-15 13:16:10.877436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.201 [2024-12-15 13:16:10.877443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.201 [2024-12-15 13:16:10.877449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.201 [2024-12-15 13:16:10.877464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.201 qpair failed and we were unable to recover it. 
00:36:03.201 [2024-12-15 13:16:10.887361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.201 [2024-12-15 13:16:10.887415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.201 [2024-12-15 13:16:10.887428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.201 [2024-12-15 13:16:10.887434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.201 [2024-12-15 13:16:10.887441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.201 [2024-12-15 13:16:10.887456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.201 qpair failed and we were unable to recover it. 
[... the preceding CONNECT failure sequence repeats 34 more times at roughly 10 ms intervals (2024-12-15 13:16:10.897 through 13:16:11.228), identical except for timestamps: Unknown controller ID 0x1, Connect command failed rc -5 to traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, sct 1 sc 130, failed to connect tqpair=0x7fbae4000b90, CQ transport error -6 (No such device or address) on qpair id 4, each ending "qpair failed and we were unable to recover it." ...]
00:36:03.464 [2024-12-15 13:16:11.238337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.464 [2024-12-15 13:16:11.238393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.464 [2024-12-15 13:16:11.238406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.464 [2024-12-15 13:16:11.238412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.464 [2024-12-15 13:16:11.238419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.464 [2024-12-15 13:16:11.238433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.464 qpair failed and we were unable to recover it. 
00:36:03.464 [2024-12-15 13:16:11.248436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.464 [2024-12-15 13:16:11.248490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.464 [2024-12-15 13:16:11.248502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.464 [2024-12-15 13:16:11.248508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.464 [2024-12-15 13:16:11.248515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.464 [2024-12-15 13:16:11.248529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.464 qpair failed and we were unable to recover it. 
00:36:03.464 [2024-12-15 13:16:11.258530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.464 [2024-12-15 13:16:11.258599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.258611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.258618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.258625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.258640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.268498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.268555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.268568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.268575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.268582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.268596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.278533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.278588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.278601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.278608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.278615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.278629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.288565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.288619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.288632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.288639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.288646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.288660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.298608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.298668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.298682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.298688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.298694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.298710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.308629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.308711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.308727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.308734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.308741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.308755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.318675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.318739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.318752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.318760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.318766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.318781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.328650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.328705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.328718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.328725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.328731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.328747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.338750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.338810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.338826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.338834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.338841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.338856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.348736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.348792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.348806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.348812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.348828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.348844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.465 [2024-12-15 13:16:11.358765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.465 [2024-12-15 13:16:11.358818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.465 [2024-12-15 13:16:11.358834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.465 [2024-12-15 13:16:11.358841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.465 [2024-12-15 13:16:11.358847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.465 [2024-12-15 13:16:11.358863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.465 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.368830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.368895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.368910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.368917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.368924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.368939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.378831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.378893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.378906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.378913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.378920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.378935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.388879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.388937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.388950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.388957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.388963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.388979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.398871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.398927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.398941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.398947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.398954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.398970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.408901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.408959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.408972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.408979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.408986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.409001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.418931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.419014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.419028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.419035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.419041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.419057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.428959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.429014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.429028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.429034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.429041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.429057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.438992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.439042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.439059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.439065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.439071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.439086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.448953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.449004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.449018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.449025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.449031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.449046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.459054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.459129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.459142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.459149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.459155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.459170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.469079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.469135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.469149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.469156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.469162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.469177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.479145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.479213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.479226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.479233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.479242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.479258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.489129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.726 [2024-12-15 13:16:11.489185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.726 [2024-12-15 13:16:11.489199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.726 [2024-12-15 13:16:11.489206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.726 [2024-12-15 13:16:11.489212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.726 [2024-12-15 13:16:11.489227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.726 qpair failed and we were unable to recover it. 
00:36:03.726 [2024-12-15 13:16:11.499228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.727 [2024-12-15 13:16:11.499333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.727 [2024-12-15 13:16:11.499346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.727 [2024-12-15 13:16:11.499353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.727 [2024-12-15 13:16:11.499359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.727 [2024-12-15 13:16:11.499374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.727 qpair failed and we were unable to recover it. 
00:36:03.727 [2024-12-15 13:16:11.509232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.509290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.509303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.509310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.509317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.509331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.519249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.519306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.519319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.519327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.519334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.519348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.529235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.529301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.529314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.529321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.529327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.529342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.539326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.539399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.539412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.539419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.539425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.539440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.549301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.549360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.549372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.549378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.549385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.549399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.559327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.559380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.559393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.559400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.559406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.559421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.569379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.569444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.569460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.569467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.569473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.569486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.579434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.579491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.579504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.579511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.579517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.579532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.589410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.589464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.589477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.589484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.589490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.589504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.599469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.599533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.599547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.599553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.599559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.599574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.609460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.609513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.609525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.609534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.609541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.609555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.619509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.619572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.619585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.727 [2024-12-15 13:16:11.619592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.727 [2024-12-15 13:16:11.619598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.727 [2024-12-15 13:16:11.619613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.727 qpair failed and we were unable to recover it.
00:36:03.727 [2024-12-15 13:16:11.629529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.727 [2024-12-15 13:16:11.629581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.727 [2024-12-15 13:16:11.629593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.728 [2024-12-15 13:16:11.629600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.728 [2024-12-15 13:16:11.629606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.728 [2024-12-15 13:16:11.629621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.728 qpair failed and we were unable to recover it.
00:36:03.988 [2024-12-15 13:16:11.639575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.988 [2024-12-15 13:16:11.639651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.988 [2024-12-15 13:16:11.639665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.988 [2024-12-15 13:16:11.639672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.988 [2024-12-15 13:16:11.639679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.988 [2024-12-15 13:16:11.639695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.988 qpair failed and we were unable to recover it.
00:36:03.988 [2024-12-15 13:16:11.649565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.988 [2024-12-15 13:16:11.649623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.988 [2024-12-15 13:16:11.649635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.988 [2024-12-15 13:16:11.649642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.988 [2024-12-15 13:16:11.649648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.988 [2024-12-15 13:16:11.649666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.988 qpair failed and we were unable to recover it.
00:36:03.988 [2024-12-15 13:16:11.659618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.988 [2024-12-15 13:16:11.659704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.988 [2024-12-15 13:16:11.659717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.988 [2024-12-15 13:16:11.659724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.988 [2024-12-15 13:16:11.659730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.988 [2024-12-15 13:16:11.659746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.988 qpair failed and we were unable to recover it.
00:36:03.988 [2024-12-15 13:16:11.669643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.988 [2024-12-15 13:16:11.669705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.988 [2024-12-15 13:16:11.669718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.988 [2024-12-15 13:16:11.669725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.988 [2024-12-15 13:16:11.669732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.988 [2024-12-15 13:16:11.669746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.988 qpair failed and we were unable to recover it.
00:36:03.988 [2024-12-15 13:16:11.679658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.988 [2024-12-15 13:16:11.679712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.988 [2024-12-15 13:16:11.679726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.988 [2024-12-15 13:16:11.679733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.988 [2024-12-15 13:16:11.679739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.988 [2024-12-15 13:16:11.679755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.988 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.689712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.689770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.689783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.689790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.689797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.689812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.699741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.699816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.699834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.699841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.699848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.699864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.709753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.709836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.709851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.709858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.709864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.709878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.719827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.719877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.719891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.719897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.719904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.719919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.729834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.729888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.729902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.729921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.729927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.729960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.739844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.739917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.739931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.739942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.739948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.739964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.749882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.749954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.749966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.749973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.749980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.749994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.759911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.759973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.759987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.759993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.759999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.760015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.769916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.769972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.769985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.769992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.769998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.770013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.779979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.780032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.780045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.780051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.780058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.780076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.789992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.790067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.790081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.790087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.790094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.790109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.799995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.800051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.800064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.800071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.800077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.800092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.810038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.810092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.989 [2024-12-15 13:16:11.810105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.989 [2024-12-15 13:16:11.810111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.989 [2024-12-15 13:16:11.810118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.989 [2024-12-15 13:16:11.810133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.989 qpair failed and we were unable to recover it.
00:36:03.989 [2024-12-15 13:16:11.820055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.989 [2024-12-15 13:16:11.820113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.990 [2024-12-15 13:16:11.820126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.990 [2024-12-15 13:16:11.820133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.990 [2024-12-15 13:16:11.820139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.990 [2024-12-15 13:16:11.820154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.990 qpair failed and we were unable to recover it.
00:36:03.990 [2024-12-15 13:16:11.830122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.990 [2024-12-15 13:16:11.830195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.990 [2024-12-15 13:16:11.830208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.990 [2024-12-15 13:16:11.830215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.990 [2024-12-15 13:16:11.830221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.990 [2024-12-15 13:16:11.830236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.990 qpair failed and we were unable to recover it.
00:36:03.990 [2024-12-15 13:16:11.840115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.990 [2024-12-15 13:16:11.840170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.990 [2024-12-15 13:16:11.840183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.990 [2024-12-15 13:16:11.840190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.990 [2024-12-15 13:16:11.840197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.990 [2024-12-15 13:16:11.840211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.990 qpair failed and we were unable to recover it.
00:36:03.990 [2024-12-15 13:16:11.850153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:03.990 [2024-12-15 13:16:11.850212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:03.990 [2024-12-15 13:16:11.850225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:03.990 [2024-12-15 13:16:11.850232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:03.990 [2024-12-15 13:16:11.850238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:03.990 [2024-12-15 13:16:11.850253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:03.990 qpair failed and we were unable to recover it.
00:36:03.990 [2024-12-15 13:16:11.860107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.990 [2024-12-15 13:16:11.860165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.990 [2024-12-15 13:16:11.860178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.990 [2024-12-15 13:16:11.860185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.990 [2024-12-15 13:16:11.860191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.990 [2024-12-15 13:16:11.860206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.990 qpair failed and we were unable to recover it. 
00:36:03.990 [2024-12-15 13:16:11.870250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.990 [2024-12-15 13:16:11.870322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.990 [2024-12-15 13:16:11.870338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.990 [2024-12-15 13:16:11.870345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.990 [2024-12-15 13:16:11.870351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.990 [2024-12-15 13:16:11.870366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.990 qpair failed and we were unable to recover it. 
00:36:03.990 [2024-12-15 13:16:11.880217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.990 [2024-12-15 13:16:11.880290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.990 [2024-12-15 13:16:11.880304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.990 [2024-12-15 13:16:11.880310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.990 [2024-12-15 13:16:11.880317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.990 [2024-12-15 13:16:11.880332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.990 qpair failed and we were unable to recover it. 
00:36:03.990 [2024-12-15 13:16:11.890243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:03.990 [2024-12-15 13:16:11.890311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:03.990 [2024-12-15 13:16:11.890325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:03.990 [2024-12-15 13:16:11.890332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:03.990 [2024-12-15 13:16:11.890340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:03.990 [2024-12-15 13:16:11.890357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.990 qpair failed and we were unable to recover it. 
00:36:04.250 [2024-12-15 13:16:11.900278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.250 [2024-12-15 13:16:11.900333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.250 [2024-12-15 13:16:11.900347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.250 [2024-12-15 13:16:11.900354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.250 [2024-12-15 13:16:11.900360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.900376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:11.910252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:11.910309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:11.910322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:11.910330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:11.910339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.910354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:11.920348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:11.920401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:11.920414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:11.920421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:11.920427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.920442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:11.930289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:11.930386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:11.930399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:11.930406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:11.930412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.930427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:11.940385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:11.940442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:11.940454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:11.940461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:11.940468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.940482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:11.950397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:11.950455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:11.950468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:11.950474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:11.950480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.950495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:11.960391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:11.960443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:11.960457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:11.960463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:11.960469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.960484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:11.970416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:11.970472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:11.970485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:11.970492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:11.970499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.970513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:11.980498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:11.980556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:11.980569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:11.980575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:11.980582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.980597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:11.990528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:11.990582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:11.990595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:11.990602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:11.990607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:11.990622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:12.000587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:12.000643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:12.000659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:12.000666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:12.000673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:12.000687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:12.010587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:12.010643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:12.010656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:12.010663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:12.010670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:12.010684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:12.020648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:12.020717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:12.020730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:12.020737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:12.020744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:12.020759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:12.030645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.251 [2024-12-15 13:16:12.030750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.251 [2024-12-15 13:16:12.030763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.251 [2024-12-15 13:16:12.030770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.251 [2024-12-15 13:16:12.030776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.251 [2024-12-15 13:16:12.030791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.251 qpair failed and we were unable to recover it. 
00:36:04.251 [2024-12-15 13:16:12.040687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.252 [2024-12-15 13:16:12.040744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.252 [2024-12-15 13:16:12.040757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.252 [2024-12-15 13:16:12.040765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.252 [2024-12-15 13:16:12.040778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.252 [2024-12-15 13:16:12.040792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.252 qpair failed and we were unable to recover it. 
00:36:04.252 [2024-12-15 13:16:12.050741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.252 [2024-12-15 13:16:12.050795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.252 [2024-12-15 13:16:12.050808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.252 [2024-12-15 13:16:12.050815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.252 [2024-12-15 13:16:12.050821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.252 [2024-12-15 13:16:12.050840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.252 qpair failed and we were unable to recover it. 
00:36:04.252 [2024-12-15 13:16:12.060790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.252 [2024-12-15 13:16:12.060853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.252 [2024-12-15 13:16:12.060866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.252 [2024-12-15 13:16:12.060873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.252 [2024-12-15 13:16:12.060880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.252 [2024-12-15 13:16:12.060895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.252 qpair failed and we were unable to recover it. 
00:36:04.252 [2024-12-15 13:16:12.070790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.252 [2024-12-15 13:16:12.070874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.252 [2024-12-15 13:16:12.070888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.252 [2024-12-15 13:16:12.070895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.252 [2024-12-15 13:16:12.070901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.252 [2024-12-15 13:16:12.070915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.252 qpair failed and we were unable to recover it. 
00:36:04.252 [2024-12-15 13:16:12.080856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.252 [2024-12-15 13:16:12.080960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.252 [2024-12-15 13:16:12.080974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.252 [2024-12-15 13:16:12.080980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.252 [2024-12-15 13:16:12.080986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.252 [2024-12-15 13:16:12.081001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.252 qpair failed and we were unable to recover it. 
00:36:04.252 [2024-12-15 13:16:12.090821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.252 [2024-12-15 13:16:12.090881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.252 [2024-12-15 13:16:12.090894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.252 [2024-12-15 13:16:12.090901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.252 [2024-12-15 13:16:12.090907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.252 [2024-12-15 13:16:12.090922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.252 qpair failed and we were unable to recover it. 
00:36:04.252 [2024-12-15 13:16:12.100893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.252 [2024-12-15 13:16:12.100949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.252 [2024-12-15 13:16:12.100962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.252 [2024-12-15 13:16:12.100968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.252 [2024-12-15 13:16:12.100975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.252 [2024-12-15 13:16:12.100990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.252 qpair failed and we were unable to recover it. 
00:36:04.252 [2024-12-15 13:16:12.110902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.252 [2024-12-15 13:16:12.110958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.252 [2024-12-15 13:16:12.110971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.252 [2024-12-15 13:16:12.110978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.252 [2024-12-15 13:16:12.110984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.252 [2024-12-15 13:16:12.111000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.252 qpair failed and we were unable to recover it. 
00:36:04.252 [2024-12-15 13:16:12.120865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.252 [2024-12-15 13:16:12.120925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.252 [2024-12-15 13:16:12.120939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.252 [2024-12-15 13:16:12.120946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.252 [2024-12-15 13:16:12.120952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.252 [2024-12-15 13:16:12.120967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.252 qpair failed and we were unable to recover it. 
00:36:04.252 [2024-12-15 13:16:12.130951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.252 [2024-12-15 13:16:12.131005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.252 [2024-12-15 13:16:12.131018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.252 [2024-12-15 13:16:12.131025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.252 [2024-12-15 13:16:12.131031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.252 [2024-12-15 13:16:12.131046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.252 qpair failed and we were unable to recover it.
00:36:04.252 [2024-12-15 13:16:12.140988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.252 [2024-12-15 13:16:12.141083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.252 [2024-12-15 13:16:12.141096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.252 [2024-12-15 13:16:12.141103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.252 [2024-12-15 13:16:12.141109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.252 [2024-12-15 13:16:12.141124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.252 qpair failed and we were unable to recover it.
00:36:04.252 [2024-12-15 13:16:12.151037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.252 [2024-12-15 13:16:12.151115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.252 [2024-12-15 13:16:12.151128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.252 [2024-12-15 13:16:12.151134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.252 [2024-12-15 13:16:12.151141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.252 [2024-12-15 13:16:12.151154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.252 qpair failed and we were unable to recover it.
00:36:04.513 [2024-12-15 13:16:12.161039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.513 [2024-12-15 13:16:12.161105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.513 [2024-12-15 13:16:12.161118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.513 [2024-12-15 13:16:12.161125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.513 [2024-12-15 13:16:12.161131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.513 [2024-12-15 13:16:12.161145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.513 qpair failed and we were unable to recover it.
00:36:04.513 [2024-12-15 13:16:12.171050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.513 [2024-12-15 13:16:12.171106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.513 [2024-12-15 13:16:12.171119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.513 [2024-12-15 13:16:12.171130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.513 [2024-12-15 13:16:12.171136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.513 [2024-12-15 13:16:12.171151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.513 qpair failed and we were unable to recover it.
00:36:04.513 [2024-12-15 13:16:12.181097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.513 [2024-12-15 13:16:12.181155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.513 [2024-12-15 13:16:12.181169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.513 [2024-12-15 13:16:12.181175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.513 [2024-12-15 13:16:12.181182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.513 [2024-12-15 13:16:12.181197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.513 qpair failed and we were unable to recover it.
00:36:04.513 [2024-12-15 13:16:12.191102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.513 [2024-12-15 13:16:12.191152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.513 [2024-12-15 13:16:12.191165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.513 [2024-12-15 13:16:12.191172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.513 [2024-12-15 13:16:12.191178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.513 [2024-12-15 13:16:12.191193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.513 qpair failed and we were unable to recover it.
00:36:04.513 [2024-12-15 13:16:12.201165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.513 [2024-12-15 13:16:12.201222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.513 [2024-12-15 13:16:12.201235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.513 [2024-12-15 13:16:12.201242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.513 [2024-12-15 13:16:12.201248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.513 [2024-12-15 13:16:12.201263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.513 qpair failed and we were unable to recover it.
00:36:04.513 [2024-12-15 13:16:12.211128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.513 [2024-12-15 13:16:12.211182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.513 [2024-12-15 13:16:12.211195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.513 [2024-12-15 13:16:12.211204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.513 [2024-12-15 13:16:12.211211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.211229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.221227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.221303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.221316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.221323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.221329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.221344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.231187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.231239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.231252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.231259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.231265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.231280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.241246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.241324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.241337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.241344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.241350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.241364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.251287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.251340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.251353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.251359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.251366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.251381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.261323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.261380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.261393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.261399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.261406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.261420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.271373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.271426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.271439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.271445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.271452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.271467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.281390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.281459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.281472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.281479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.281484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.281499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.291420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.291482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.291495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.291502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.291508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.291522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.301456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.301518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.301531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.301541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.301547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.301561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.311480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.311544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.311557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.311563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.311570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.311584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.321514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.321576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.321589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.321596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.321603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.321617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.331528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.331590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.331603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.331611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.331617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.331631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.341579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.514 [2024-12-15 13:16:12.341644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.514 [2024-12-15 13:16:12.341657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.514 [2024-12-15 13:16:12.341664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.514 [2024-12-15 13:16:12.341670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.514 [2024-12-15 13:16:12.341688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.514 qpair failed and we were unable to recover it.
00:36:04.514 [2024-12-15 13:16:12.351604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.515 [2024-12-15 13:16:12.351665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.515 [2024-12-15 13:16:12.351678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.515 [2024-12-15 13:16:12.351686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.515 [2024-12-15 13:16:12.351692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.515 [2024-12-15 13:16:12.351707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.515 qpair failed and we were unable to recover it.
00:36:04.515 [2024-12-15 13:16:12.361620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.515 [2024-12-15 13:16:12.361674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.515 [2024-12-15 13:16:12.361687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.515 [2024-12-15 13:16:12.361695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.515 [2024-12-15 13:16:12.361701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.515 [2024-12-15 13:16:12.361715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.515 qpair failed and we were unable to recover it.
00:36:04.515 [2024-12-15 13:16:12.371677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.515 [2024-12-15 13:16:12.371738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.515 [2024-12-15 13:16:12.371750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.515 [2024-12-15 13:16:12.371757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.515 [2024-12-15 13:16:12.371764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.515 [2024-12-15 13:16:12.371779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.515 qpair failed and we were unable to recover it.
00:36:04.515 [2024-12-15 13:16:12.381636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.515 [2024-12-15 13:16:12.381693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.515 [2024-12-15 13:16:12.381707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.515 [2024-12-15 13:16:12.381713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.515 [2024-12-15 13:16:12.381720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.515 [2024-12-15 13:16:12.381734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.515 qpair failed and we were unable to recover it.
00:36:04.515 [2024-12-15 13:16:12.391711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.515 [2024-12-15 13:16:12.391776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.515 [2024-12-15 13:16:12.391789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.515 [2024-12-15 13:16:12.391796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.515 [2024-12-15 13:16:12.391802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.515 [2024-12-15 13:16:12.391817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.515 qpair failed and we were unable to recover it.
00:36:04.515 [2024-12-15 13:16:12.401732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.515 [2024-12-15 13:16:12.401796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.515 [2024-12-15 13:16:12.401810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.515 [2024-12-15 13:16:12.401817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.515 [2024-12-15 13:16:12.401823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.515 [2024-12-15 13:16:12.401841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.515 qpair failed and we were unable to recover it.
00:36:04.515 [2024-12-15 13:16:12.411766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.515 [2024-12-15 13:16:12.411832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.515 [2024-12-15 13:16:12.411846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.515 [2024-12-15 13:16:12.411853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.515 [2024-12-15 13:16:12.411859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.515 [2024-12-15 13:16:12.411874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.515 qpair failed and we were unable to recover it.
00:36:04.775 [2024-12-15 13:16:12.421807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.775 [2024-12-15 13:16:12.421875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.775 [2024-12-15 13:16:12.421889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.776 [2024-12-15 13:16:12.421897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.776 [2024-12-15 13:16:12.421903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.776 [2024-12-15 13:16:12.421918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.776 qpair failed and we were unable to recover it.
00:36:04.776 [2024-12-15 13:16:12.431844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.776 [2024-12-15 13:16:12.431952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.776 [2024-12-15 13:16:12.431968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.776 [2024-12-15 13:16:12.431974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.776 [2024-12-15 13:16:12.431980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.776 [2024-12-15 13:16:12.431995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.776 qpair failed and we were unable to recover it.
00:36:04.776 [2024-12-15 13:16:12.441868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.776 [2024-12-15 13:16:12.441940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.776 [2024-12-15 13:16:12.441953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.776 [2024-12-15 13:16:12.441960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.776 [2024-12-15 13:16:12.441966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.776 [2024-12-15 13:16:12.441981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.776 qpair failed and we were unable to recover it.
00:36:04.776 [2024-12-15 13:16:12.451807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.776 [2024-12-15 13:16:12.451866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.776 [2024-12-15 13:16:12.451880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.776 [2024-12-15 13:16:12.451887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.776 [2024-12-15 13:16:12.451893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.776 [2024-12-15 13:16:12.451907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.776 qpair failed and we were unable to recover it.
00:36:04.776 [2024-12-15 13:16:12.461934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.776 [2024-12-15 13:16:12.462001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.776 [2024-12-15 13:16:12.462014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.776 [2024-12-15 13:16:12.462021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.776 [2024-12-15 13:16:12.462027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.776 [2024-12-15 13:16:12.462041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.776 qpair failed and we were unable to recover it.
00:36:04.776 [2024-12-15 13:16:12.471946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:04.776 [2024-12-15 13:16:12.471998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:04.776 [2024-12-15 13:16:12.472011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:04.776 [2024-12-15 13:16:12.472018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:04.776 [2024-12-15 13:16:12.472027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:04.776 [2024-12-15 13:16:12.472043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:04.776 qpair failed and we were unable to recover it.
00:36:04.776 [2024-12-15 13:16:12.481916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.776 [2024-12-15 13:16:12.481967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.776 [2024-12-15 13:16:12.481979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.776 [2024-12-15 13:16:12.481986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.776 [2024-12-15 13:16:12.481992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.776 [2024-12-15 13:16:12.482007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-12-15 13:16:12.492006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.776 [2024-12-15 13:16:12.492099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.776 [2024-12-15 13:16:12.492112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.776 [2024-12-15 13:16:12.492119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.776 [2024-12-15 13:16:12.492125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.776 [2024-12-15 13:16:12.492140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-12-15 13:16:12.502070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.776 [2024-12-15 13:16:12.502146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.776 [2024-12-15 13:16:12.502159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.776 [2024-12-15 13:16:12.502166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.776 [2024-12-15 13:16:12.502172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.776 [2024-12-15 13:16:12.502187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-12-15 13:16:12.512072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.776 [2024-12-15 13:16:12.512130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.776 [2024-12-15 13:16:12.512143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.776 [2024-12-15 13:16:12.512150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.776 [2024-12-15 13:16:12.512156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.776 [2024-12-15 13:16:12.512171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-12-15 13:16:12.522099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.776 [2024-12-15 13:16:12.522155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.776 [2024-12-15 13:16:12.522168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.776 [2024-12-15 13:16:12.522175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.776 [2024-12-15 13:16:12.522183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.776 [2024-12-15 13:16:12.522197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-12-15 13:16:12.532136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.776 [2024-12-15 13:16:12.532209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.776 [2024-12-15 13:16:12.532222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.776 [2024-12-15 13:16:12.532228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.776 [2024-12-15 13:16:12.532234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.776 [2024-12-15 13:16:12.532249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-12-15 13:16:12.542163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.776 [2024-12-15 13:16:12.542239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.776 [2024-12-15 13:16:12.542254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.776 [2024-12-15 13:16:12.542260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.776 [2024-12-15 13:16:12.542267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.776 [2024-12-15 13:16:12.542281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.776 qpair failed and we were unable to recover it. 
00:36:04.776 [2024-12-15 13:16:12.552177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.776 [2024-12-15 13:16:12.552229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.776 [2024-12-15 13:16:12.552242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.776 [2024-12-15 13:16:12.552248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.776 [2024-12-15 13:16:12.552254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.552269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.562197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.562253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.562269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.562276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.562282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.562297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.572225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.572276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.572289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.572296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.572302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.572316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.582239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.582296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.582309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.582316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.582322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.582336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.592284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.592347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.592359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.592367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.592373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.592388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.602320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.602376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.602390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.602396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.602405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.602419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.612379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.612443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.612455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.612462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.612468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.612483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.622342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.622440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.622453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.622459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.622465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.622479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.632411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.632477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.632490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.632496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.632503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.632517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.642444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.642508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.642521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.642528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.642534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.642547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.652471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.652526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.652538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.652545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.652551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.652566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.662509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.662576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.662588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.662595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.662601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.662616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:04.777 [2024-12-15 13:16:12.672536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:04.777 [2024-12-15 13:16:12.672594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:04.777 [2024-12-15 13:16:12.672607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:04.777 [2024-12-15 13:16:12.672614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:04.777 [2024-12-15 13:16:12.672620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:04.777 [2024-12-15 13:16:12.672635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:04.777 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.682557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.682636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.682649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.682656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.682663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.682678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.692524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.692579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.692593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.692600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.692606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.692621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.702639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.702737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.702750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.702757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.702763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.702777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.712654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.712732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.712745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.712752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.712758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.712772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.722678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.722748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.722761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.722768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.722775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.722790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.732692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.732754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.732767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.732777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.732783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.732798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.742709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.742780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.742793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.742800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.742806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.742821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.752765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.752831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.752844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.752851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.752857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.752872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.762823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.762891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.762904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.762911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.762917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.762933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.772808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.772873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.772886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.772894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.772900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.772917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.782760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.782829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.782842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.782850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.782856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.782871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.792868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.792933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.792945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.792952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.792958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.792973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.038 [2024-12-15 13:16:12.802899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.038 [2024-12-15 13:16:12.802960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.038 [2024-12-15 13:16:12.802973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.038 [2024-12-15 13:16:12.802980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.038 [2024-12-15 13:16:12.802987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.038 [2024-12-15 13:16:12.803002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.038 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.812919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.812982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.812994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.813001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.813007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.813022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.822987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.823080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.823093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.823100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.823106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.823120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.832973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.833038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.833051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.833059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.833065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.833079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.842997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.843046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.843059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.843066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.843072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.843087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.853047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.853122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.853135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.853142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.853148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.853162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.863070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.863125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.863138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.863151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.863157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.863172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.873089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.873157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.873170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.873176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.873183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.873197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.883122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.883187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.883200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.883208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.883214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.883229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.893180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.893240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.893253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.893260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.893267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.893280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.903184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.903251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.903264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.903272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.903278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.903295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.913207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.913274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.913287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.913294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.913300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.913314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.923231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.923290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.923303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.923310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.923316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.923332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.039 [2024-12-15 13:16:12.933265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.039 [2024-12-15 13:16:12.933316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.039 [2024-12-15 13:16:12.933330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.039 [2024-12-15 13:16:12.933337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.039 [2024-12-15 13:16:12.933343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.039 [2024-12-15 13:16:12.933358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.039 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:12.943316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:12.943381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:12.943394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:12.943402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:12.943408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:12.943423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:12.953325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:12.953379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:12.953391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:12.953398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:12.953405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:12.953419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:12.963341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:12.963398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:12.963411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:12.963419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:12.963425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:12.963439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:12.973358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:12.973409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:12.973422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:12.973428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:12.973434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:12.973449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:12.983406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:12.983473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:12.983485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:12.983492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:12.983499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:12.983513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:12.993461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:12.993540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:12.993557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:12.993564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:12.993570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:12.993584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:13.003465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:13.003528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:13.003541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:13.003549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:13.003555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:13.003569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:13.013493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:13.013567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:13.013580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:13.013587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:13.013593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:13.013607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:13.023530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:13.023613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:13.023627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:13.023634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:13.023640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:13.023655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:13.033556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.300 [2024-12-15 13:16:13.033622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.300 [2024-12-15 13:16:13.033636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.300 [2024-12-15 13:16:13.033643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.300 [2024-12-15 13:16:13.033652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.300 [2024-12-15 13:16:13.033667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.300 qpair failed and we were unable to recover it. 
00:36:05.300 [2024-12-15 13:16:13.043607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.301 [2024-12-15 13:16:13.043671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.301 [2024-12-15 13:16:13.043684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.301 [2024-12-15 13:16:13.043691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.301 [2024-12-15 13:16:13.043697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.301 [2024-12-15 13:16:13.043711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.301 qpair failed and we were unable to recover it. 
00:36:05.301 [2024-12-15 13:16:13.053603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.301 [2024-12-15 13:16:13.053653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.301 [2024-12-15 13:16:13.053666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.301 [2024-12-15 13:16:13.053673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.301 [2024-12-15 13:16:13.053680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.301 [2024-12-15 13:16:13.053695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.301 qpair failed and we were unable to recover it. 
00:36:05.301 [2024-12-15 13:16:13.063635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.063716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.063730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.063737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.063743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.063758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.073636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.073698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.073710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.073717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.073723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.073738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.083707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.083774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.083787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.083794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.083800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.083815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.093713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.093779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.093792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.093799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.093805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.093820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.103741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.103806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.103819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.103830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.103838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.103853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.113793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.113866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.113880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.113888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.113895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.113910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.123781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.123841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.123859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.123866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.123873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.123887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.133832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.133885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.133899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.133905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.133911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.133926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.143868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.143935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.143948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.143955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.143961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.143976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.153889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.153954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.153967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.153974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.153980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.153995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.163913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.163976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.163989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.301 [2024-12-15 13:16:13.163996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.301 [2024-12-15 13:16:13.164004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.301 [2024-12-15 13:16:13.164020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.301 qpair failed and we were unable to recover it.
00:36:05.301 [2024-12-15 13:16:13.173929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.301 [2024-12-15 13:16:13.173993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.301 [2024-12-15 13:16:13.174005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.302 [2024-12-15 13:16:13.174012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.302 [2024-12-15 13:16:13.174018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.302 [2024-12-15 13:16:13.174033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.302 qpair failed and we were unable to recover it.
00:36:05.302 [2024-12-15 13:16:13.184001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.302 [2024-12-15 13:16:13.184067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.302 [2024-12-15 13:16:13.184080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.302 [2024-12-15 13:16:13.184087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.302 [2024-12-15 13:16:13.184093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.302 [2024-12-15 13:16:13.184108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.302 qpair failed and we were unable to recover it.
00:36:05.302 [2024-12-15 13:16:13.193985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.302 [2024-12-15 13:16:13.194041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.302 [2024-12-15 13:16:13.194053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.302 [2024-12-15 13:16:13.194060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.302 [2024-12-15 13:16:13.194066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.302 [2024-12-15 13:16:13.194080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.302 qpair failed and we were unable to recover it.
00:36:05.302 [2024-12-15 13:16:13.204036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.302 [2024-12-15 13:16:13.204111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.302 [2024-12-15 13:16:13.204124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.302 [2024-12-15 13:16:13.204130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.302 [2024-12-15 13:16:13.204137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.302 [2024-12-15 13:16:13.204151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.302 qpair failed and we were unable to recover it.
00:36:05.562 [2024-12-15 13:16:13.214054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.562 [2024-12-15 13:16:13.214109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.562 [2024-12-15 13:16:13.214122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.562 [2024-12-15 13:16:13.214129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.562 [2024-12-15 13:16:13.214136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.562 [2024-12-15 13:16:13.214150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.562 qpair failed and we were unable to recover it.
00:36:05.562 [2024-12-15 13:16:13.224094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.562 [2024-12-15 13:16:13.224148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.562 [2024-12-15 13:16:13.224160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.562 [2024-12-15 13:16:13.224167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.562 [2024-12-15 13:16:13.224173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.562 [2024-12-15 13:16:13.224189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.562 qpair failed and we were unable to recover it.
00:36:05.562 [2024-12-15 13:16:13.234116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.562 [2024-12-15 13:16:13.234179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.562 [2024-12-15 13:16:13.234191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.562 [2024-12-15 13:16:13.234198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.562 [2024-12-15 13:16:13.234204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.562 [2024-12-15 13:16:13.234219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.562 qpair failed and we were unable to recover it.
00:36:05.562 [2024-12-15 13:16:13.244168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.562 [2024-12-15 13:16:13.244249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.562 [2024-12-15 13:16:13.244261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.562 [2024-12-15 13:16:13.244268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.562 [2024-12-15 13:16:13.244274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.562 [2024-12-15 13:16:13.244288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.562 qpair failed and we were unable to recover it.
00:36:05.562 [2024-12-15 13:16:13.254151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.562 [2024-12-15 13:16:13.254213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.562 [2024-12-15 13:16:13.254226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.562 [2024-12-15 13:16:13.254232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.562 [2024-12-15 13:16:13.254239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.562 [2024-12-15 13:16:13.254253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.562 qpair failed and we were unable to recover it.
00:36:05.562 [2024-12-15 13:16:13.264193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.562 [2024-12-15 13:16:13.264250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.562 [2024-12-15 13:16:13.264264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.562 [2024-12-15 13:16:13.264270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.562 [2024-12-15 13:16:13.264276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.562 [2024-12-15 13:16:13.264291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.562 qpair failed and we were unable to recover it.
00:36:05.562 [2024-12-15 13:16:13.274157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.562 [2024-12-15 13:16:13.274215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.562 [2024-12-15 13:16:13.274227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.562 [2024-12-15 13:16:13.274234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.562 [2024-12-15 13:16:13.274240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.562 [2024-12-15 13:16:13.274254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.562 qpair failed and we were unable to recover it.
00:36:05.562 [2024-12-15 13:16:13.284210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.562 [2024-12-15 13:16:13.284281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.562 [2024-12-15 13:16:13.284296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.562 [2024-12-15 13:16:13.284304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.284310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.284325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.294263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.294320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.294333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.294343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.294349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.294363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.304278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.304338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.304352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.304358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.304365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.304380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.314352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.314405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.314418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.314424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.314431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.314446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.324277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.324346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.324360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.324367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.324373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.324387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.334309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.334365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.334379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.334386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.334392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.334410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.344429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.344497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.344510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.344517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.344523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.344538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.354485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.354539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.354552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.354558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.354565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.354579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.364478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.364530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.364542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.364549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.364555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.364570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.374513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.374563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.374576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.374582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.374588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.374604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.384460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.384521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.384533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.384540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.384546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.384561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.394538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.394596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.394609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.394616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.394622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.394638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.404578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.563 [2024-12-15 13:16:13.404641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.563 [2024-12-15 13:16:13.404654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.563 [2024-12-15 13:16:13.404661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.563 [2024-12-15 13:16:13.404667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.563 [2024-12-15 13:16:13.404681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.563 qpair failed and we were unable to recover it.
00:36:05.563 [2024-12-15 13:16:13.414612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.563 [2024-12-15 13:16:13.414664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.563 [2024-12-15 13:16:13.414676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.563 [2024-12-15 13:16:13.414683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.563 [2024-12-15 13:16:13.414689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.563 [2024-12-15 13:16:13.414704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.563 qpair failed and we were unable to recover it. 
00:36:05.564 [2024-12-15 13:16:13.424659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.564 [2024-12-15 13:16:13.424713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.564 [2024-12-15 13:16:13.424728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.564 [2024-12-15 13:16:13.424735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.564 [2024-12-15 13:16:13.424741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.564 [2024-12-15 13:16:13.424756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.564 qpair failed and we were unable to recover it.
00:36:05.564 [2024-12-15 13:16:13.434687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.564 [2024-12-15 13:16:13.434750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.564 [2024-12-15 13:16:13.434763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.564 [2024-12-15 13:16:13.434771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.564 [2024-12-15 13:16:13.434777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.564 [2024-12-15 13:16:13.434792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.564 qpair failed and we were unable to recover it.
00:36:05.564 [2024-12-15 13:16:13.444640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.564 [2024-12-15 13:16:13.444698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.564 [2024-12-15 13:16:13.444711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.564 [2024-12-15 13:16:13.444718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.564 [2024-12-15 13:16:13.444725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.564 [2024-12-15 13:16:13.444740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.564 qpair failed and we were unable to recover it.
00:36:05.564 [2024-12-15 13:16:13.454670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.564 [2024-12-15 13:16:13.454730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.564 [2024-12-15 13:16:13.454743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.564 [2024-12-15 13:16:13.454750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.564 [2024-12-15 13:16:13.454756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.564 [2024-12-15 13:16:13.454771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.564 qpair failed and we were unable to recover it.
00:36:05.564 [2024-12-15 13:16:13.464750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.564 [2024-12-15 13:16:13.464812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.564 [2024-12-15 13:16:13.464830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.564 [2024-12-15 13:16:13.464837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.564 [2024-12-15 13:16:13.464844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.564 [2024-12-15 13:16:13.464863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.564 qpair failed and we were unable to recover it.
00:36:05.827 [2024-12-15 13:16:13.474773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.827 [2024-12-15 13:16:13.474849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.827 [2024-12-15 13:16:13.474862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.827 [2024-12-15 13:16:13.474869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.827 [2024-12-15 13:16:13.474876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.827 [2024-12-15 13:16:13.474892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.827 qpair failed and we were unable to recover it.
00:36:05.827 [2024-12-15 13:16:13.484820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.827 [2024-12-15 13:16:13.484918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.827 [2024-12-15 13:16:13.484931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.827 [2024-12-15 13:16:13.484938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.827 [2024-12-15 13:16:13.484944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.484958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.494774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.494828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.494841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.494848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.494854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.494869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.504927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.505014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.505027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.505034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.505040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.505056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.514908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.514967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.514981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.514987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.514994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.515009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.524924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.525020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.525033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.525040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.525047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.525062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.534954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.535013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.535026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.535033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.535040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.535055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.545033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.545088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.545102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.545109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.545115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.545129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.554997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.555056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.555072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.555079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.555086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.555101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.565055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.565112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.565126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.565134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.565140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.565155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.575079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.575132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.575145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.575152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.575159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.575175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.585075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.585131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.585144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.585152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.585158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.585174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.595078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.595137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.828 [2024-12-15 13:16:13.595150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.828 [2024-12-15 13:16:13.595157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.828 [2024-12-15 13:16:13.595170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.828 [2024-12-15 13:16:13.595184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.828 qpair failed and we were unable to recover it.
00:36:05.828 [2024-12-15 13:16:13.605136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.828 [2024-12-15 13:16:13.605190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.605203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.605210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.605216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.605231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.615250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.615302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.615316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.615323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.615329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.615343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.625214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.625296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.625309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.625316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.625322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.625337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.635244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.635302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.635315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.635323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.635330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.635345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.645229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.645282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.645295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.645302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.645309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.645325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.655282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.655384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.655397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.655404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.655410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.655426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.665390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.665456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.665469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.665475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.665482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.665496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.675369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.675427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.675439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.675446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.675452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.675466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.685434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.685490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.685506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.685513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.685520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.685535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.695441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.695491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.695504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.695511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.695517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.695532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.705546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.705602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.829 [2024-12-15 13:16:13.705615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.829 [2024-12-15 13:16:13.705621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.829 [2024-12-15 13:16:13.705628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.829 [2024-12-15 13:16:13.705643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.829 qpair failed and we were unable to recover it.
00:36:05.829 [2024-12-15 13:16:13.715502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:05.829 [2024-12-15 13:16:13.715553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:05.830 [2024-12-15 13:16:13.715566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:05.830 [2024-12-15 13:16:13.715573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:05.830 [2024-12-15 13:16:13.715579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:05.830 [2024-12-15 13:16:13.715594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.830 qpair failed and we were unable to recover it.
00:36:05.830 [2024-12-15 13:16:13.725513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.830 [2024-12-15 13:16:13.725570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.830 [2024-12-15 13:16:13.725583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.830 [2024-12-15 13:16:13.725592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.830 [2024-12-15 13:16:13.725598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:05.830 [2024-12-15 13:16:13.725613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:05.830 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.735554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.735610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.735623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.735631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.735637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.735651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.745589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.745644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.745657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.745664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.745671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.745685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.755601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.755657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.755671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.755678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.755684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.755698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.765624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.765678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.765691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.765698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.765704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.765719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.775687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.775740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.775753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.775760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.775767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.775781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.785737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.785845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.785859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.785865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.785872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.785887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.795711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.795766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.795779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.795786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.795792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.795807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.805768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.805836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.805850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.805857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.805863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.805879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.815750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.815807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.815820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.815831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.815837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.815852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.825799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.825863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.825877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.825884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.825890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.825906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.835817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.835887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.835900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.835907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.835913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.835928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.092 qpair failed and we were unable to recover it. 
00:36:06.092 [2024-12-15 13:16:13.845841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.092 [2024-12-15 13:16:13.845893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.092 [2024-12-15 13:16:13.845907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.092 [2024-12-15 13:16:13.845913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.092 [2024-12-15 13:16:13.845920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.092 [2024-12-15 13:16:13.845935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.855889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.855946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.855959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.855969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.855976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.855991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.865895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.865951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.865964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.865971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.865977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.865992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.875907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.875969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.875982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.875989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.875995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.876010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.885984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.886067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.886080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.886087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.886093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.886108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.895964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.896018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.896031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.896038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.896044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.896062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.906049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.906104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.906116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.906123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.906130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.906145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.916037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.916091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.916104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.916111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.916118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.916133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.926107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.926167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.926180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.926187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.926193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.926208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.936083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.936137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.936149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.936156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.936162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.936178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.946127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.946192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.946205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.946212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.946218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.946233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.956122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.956180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.093 [2024-12-15 13:16:13.956193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.093 [2024-12-15 13:16:13.956200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.093 [2024-12-15 13:16:13.956207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.093 [2024-12-15 13:16:13.956222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.093 qpair failed and we were unable to recover it. 
00:36:06.093 [2024-12-15 13:16:13.966166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.093 [2024-12-15 13:16:13.966223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.094 [2024-12-15 13:16:13.966236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.094 [2024-12-15 13:16:13.966243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.094 [2024-12-15 13:16:13.966250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.094 [2024-12-15 13:16:13.966264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.094 qpair failed and we were unable to recover it. 
00:36:06.094 [2024-12-15 13:16:13.976193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.094 [2024-12-15 13:16:13.976246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.094 [2024-12-15 13:16:13.976259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.094 [2024-12-15 13:16:13.976266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.094 [2024-12-15 13:16:13.976272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.094 [2024-12-15 13:16:13.976287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.094 qpair failed and we were unable to recover it. 
00:36:06.094 [2024-12-15 13:16:13.986227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.094 [2024-12-15 13:16:13.986285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.094 [2024-12-15 13:16:13.986301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.094 [2024-12-15 13:16:13.986309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.094 [2024-12-15 13:16:13.986315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.094 [2024-12-15 13:16:13.986330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.094 qpair failed and we were unable to recover it. 
00:36:06.094 [2024-12-15 13:16:13.996255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.094 [2024-12-15 13:16:13.996312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.094 [2024-12-15 13:16:13.996325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.094 [2024-12-15 13:16:13.996331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.094 [2024-12-15 13:16:13.996338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.094 [2024-12-15 13:16:13.996352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.094 qpair failed and we were unable to recover it.
00:36:06.354 [2024-12-15 13:16:14.006309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.354 [2024-12-15 13:16:14.006377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.354 [2024-12-15 13:16:14.006390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.354 [2024-12-15 13:16:14.006397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.354 [2024-12-15 13:16:14.006404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.354 [2024-12-15 13:16:14.006418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.354 qpair failed and we were unable to recover it.
00:36:06.354 [2024-12-15 13:16:14.016311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.354 [2024-12-15 13:16:14.016367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.354 [2024-12-15 13:16:14.016380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.354 [2024-12-15 13:16:14.016387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.016393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.016408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.026354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.026429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.026442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.026449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.026455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.026474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.036375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.036438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.036451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.036458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.036464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.036478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.046393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.046490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.046504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.046511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.046517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.046531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.056399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.056463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.056476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.056483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.056489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.056504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.066452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.066512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.066524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.066531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.066538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.066553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.076489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.076541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.076554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.076561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.076566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.076582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.086508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.086582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.086595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.086602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.086608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.086623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.096461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.096511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.096524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.096531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.096538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.096553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.106579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.106633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.106646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.106652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.106659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.106673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.116597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.116650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.116666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.116673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.116679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.116695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.126625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.126677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.126690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.126697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.126704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.126718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.136639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.136697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.355 [2024-12-15 13:16:14.136710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.355 [2024-12-15 13:16:14.136717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.355 [2024-12-15 13:16:14.136723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.355 [2024-12-15 13:16:14.136737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.355 qpair failed and we were unable to recover it.
00:36:06.355 [2024-12-15 13:16:14.146689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.355 [2024-12-15 13:16:14.146753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.146766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.146773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.146779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.146793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.156721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.156801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.156815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.156821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.156834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.156849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.166761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.166855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.166867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.166874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.166881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.166895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.176807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.176870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.176884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.176891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.176897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.176911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.186899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.186972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.186986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.186993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.186999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.187014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.196844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.196900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.196913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.196919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.196926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.196941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.206917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.206975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.206988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.206996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.207002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.207017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.216917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.216976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.216989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.216996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.217002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.217017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.226930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.226985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.226999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.227005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.227012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.227026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.236966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.237031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.237043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.237050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.237056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.237071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.246979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.247036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.247052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.247059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.247065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.247080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.356 [2024-12-15 13:16:14.257017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.356 [2024-12-15 13:16:14.257071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.356 [2024-12-15 13:16:14.257084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.356 [2024-12-15 13:16:14.257091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.356 [2024-12-15 13:16:14.257097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.356 [2024-12-15 13:16:14.257111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.356 qpair failed and we were unable to recover it.
00:36:06.617 [2024-12-15 13:16:14.267049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.617 [2024-12-15 13:16:14.267122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.617 [2024-12-15 13:16:14.267136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.617 [2024-12-15 13:16:14.267143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.617 [2024-12-15 13:16:14.267149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.617 [2024-12-15 13:16:14.267164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.617 qpair failed and we were unable to recover it.
00:36:06.617 [2024-12-15 13:16:14.277078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.617 [2024-12-15 13:16:14.277135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.617 [2024-12-15 13:16:14.277148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.617 [2024-12-15 13:16:14.277155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.617 [2024-12-15 13:16:14.277162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.617 [2024-12-15 13:16:14.277176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.617 qpair failed and we were unable to recover it.
00:36:06.617 [2024-12-15 13:16:14.287130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.617 [2024-12-15 13:16:14.287189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.617 [2024-12-15 13:16:14.287202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.617 [2024-12-15 13:16:14.287212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.617 [2024-12-15 13:16:14.287219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.617 [2024-12-15 13:16:14.287233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.617 qpair failed and we were unable to recover it.
00:36:06.617 [2024-12-15 13:16:14.297159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.617 [2024-12-15 13:16:14.297224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.617 [2024-12-15 13:16:14.297236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.617 [2024-12-15 13:16:14.297243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.617 [2024-12-15 13:16:14.297249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.617 [2024-12-15 13:16:14.297263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.617 qpair failed and we were unable to recover it.
00:36:06.617 [2024-12-15 13:16:14.307212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.617 [2024-12-15 13:16:14.307281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.617 [2024-12-15 13:16:14.307294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.617 [2024-12-15 13:16:14.307301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.617 [2024-12-15 13:16:14.307307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.617 [2024-12-15 13:16:14.307322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.617 qpair failed and we were unable to recover it.
00:36:06.617 [2024-12-15 13:16:14.317189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.617 [2024-12-15 13:16:14.317248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.617 [2024-12-15 13:16:14.317261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.617 [2024-12-15 13:16:14.317268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.617 [2024-12-15 13:16:14.317275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.617 [2024-12-15 13:16:14.317289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.617 qpair failed and we were unable to recover it.
00:36:06.618 [2024-12-15 13:16:14.327212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.618 [2024-12-15 13:16:14.327263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.618 [2024-12-15 13:16:14.327276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.618 [2024-12-15 13:16:14.327283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.618 [2024-12-15 13:16:14.327290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.618 [2024-12-15 13:16:14.327305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.618 qpair failed and we were unable to recover it.
00:36:06.618 [2024-12-15 13:16:14.337238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:06.618 [2024-12-15 13:16:14.337295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:06.618 [2024-12-15 13:16:14.337309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:06.618 [2024-12-15 13:16:14.337316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:06.618 [2024-12-15 13:16:14.337323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90
00:36:06.618 [2024-12-15 13:16:14.337337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:06.618 qpair failed and we were unable to recover it.
00:36:06.618 [2024-12-15 13:16:14.347272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.347328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.347342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.347348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.347355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.347369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.357299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.357377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.357391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.357398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.357404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.357418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.367347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.367406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.367424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.367433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.367439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.367458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.377345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.377407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.377420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.377428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.377434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.377449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.387388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.387444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.387458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.387465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.387471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.387487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.397417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.397488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.397502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.397508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.397514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.397530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.407510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.407565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.407578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.407585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.407591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.407606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.417505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.417566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.417579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.417589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.417595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.417610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.427566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.427620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.427633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.427640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.427647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.427662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.437528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.437588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.437601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.437609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.437616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.618 [2024-12-15 13:16:14.437631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.618 qpair failed and we were unable to recover it. 
00:36:06.618 [2024-12-15 13:16:14.447550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.618 [2024-12-15 13:16:14.447608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.618 [2024-12-15 13:16:14.447620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.618 [2024-12-15 13:16:14.447627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.618 [2024-12-15 13:16:14.447634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.619 [2024-12-15 13:16:14.447648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.619 qpair failed and we were unable to recover it. 
00:36:06.619 [2024-12-15 13:16:14.457572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.619 [2024-12-15 13:16:14.457631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.619 [2024-12-15 13:16:14.457644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.619 [2024-12-15 13:16:14.457651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.619 [2024-12-15 13:16:14.457657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.619 [2024-12-15 13:16:14.457675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.619 qpair failed and we were unable to recover it. 
00:36:06.619 [2024-12-15 13:16:14.467613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.619 [2024-12-15 13:16:14.467669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.619 [2024-12-15 13:16:14.467682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.619 [2024-12-15 13:16:14.467689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.619 [2024-12-15 13:16:14.467695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.619 [2024-12-15 13:16:14.467709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.619 qpair failed and we were unable to recover it. 
00:36:06.619 [2024-12-15 13:16:14.477686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.619 [2024-12-15 13:16:14.477740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.619 [2024-12-15 13:16:14.477753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.619 [2024-12-15 13:16:14.477759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.619 [2024-12-15 13:16:14.477767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.619 [2024-12-15 13:16:14.477781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.619 qpair failed and we were unable to recover it. 
00:36:06.619 [2024-12-15 13:16:14.487662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.619 [2024-12-15 13:16:14.487717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.619 [2024-12-15 13:16:14.487731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.619 [2024-12-15 13:16:14.487737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.619 [2024-12-15 13:16:14.487744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.619 [2024-12-15 13:16:14.487758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.619 qpair failed and we were unable to recover it. 
00:36:06.619 [2024-12-15 13:16:14.497701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.619 [2024-12-15 13:16:14.497756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.619 [2024-12-15 13:16:14.497768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.619 [2024-12-15 13:16:14.497775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.619 [2024-12-15 13:16:14.497781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.619 [2024-12-15 13:16:14.497795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.619 qpair failed and we were unable to recover it. 
00:36:06.619 [2024-12-15 13:16:14.507754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.619 [2024-12-15 13:16:14.507812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.619 [2024-12-15 13:16:14.507829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.619 [2024-12-15 13:16:14.507836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.619 [2024-12-15 13:16:14.507842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.619 [2024-12-15 13:16:14.507857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.619 qpair failed and we were unable to recover it. 
00:36:06.619 [2024-12-15 13:16:14.517781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.619 [2024-12-15 13:16:14.517837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.619 [2024-12-15 13:16:14.517850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.619 [2024-12-15 13:16:14.517857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.619 [2024-12-15 13:16:14.517863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.619 [2024-12-15 13:16:14.517878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.619 qpair failed and we were unable to recover it. 
00:36:06.880 [2024-12-15 13:16:14.527793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.880 [2024-12-15 13:16:14.527853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.880 [2024-12-15 13:16:14.527867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.880 [2024-12-15 13:16:14.527874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.880 [2024-12-15 13:16:14.527881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.880 [2024-12-15 13:16:14.527896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.880 qpair failed and we were unable to recover it. 
00:36:06.880 [2024-12-15 13:16:14.537847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.880 [2024-12-15 13:16:14.537901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.880 [2024-12-15 13:16:14.537914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.880 [2024-12-15 13:16:14.537921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.880 [2024-12-15 13:16:14.537927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.880 [2024-12-15 13:16:14.537942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.880 qpair failed and we were unable to recover it. 
00:36:06.880 [2024-12-15 13:16:14.547860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.880 [2024-12-15 13:16:14.547915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.880 [2024-12-15 13:16:14.547930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.880 [2024-12-15 13:16:14.547937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.880 [2024-12-15 13:16:14.547944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.880 [2024-12-15 13:16:14.547958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.880 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.557897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.557967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.557980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.557987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.557993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.558008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.567921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.568012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.568026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.568032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.568039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.568054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.577924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.577975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.577988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.577995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.578001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.578016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.587992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.588099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.588112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.588119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.588128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.588143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.598013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.598071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.598083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.598090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.598097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.598111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.608005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.608057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.608070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.608077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.608083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.608097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.618041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.618092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.618104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.618111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.618118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.618133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.628014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.628102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.628116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.628123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.628129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.628144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.638099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.638178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.638192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.638199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.638206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.638221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.648059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.648147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.648161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.648168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.648174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.648189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.658170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.658226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.658241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.658248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.658254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.658269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.668237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.668306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.668319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.668326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.668332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.668346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.678142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.881 [2024-12-15 13:16:14.678239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.881 [2024-12-15 13:16:14.678255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.881 [2024-12-15 13:16:14.678262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.881 [2024-12-15 13:16:14.678268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.881 [2024-12-15 13:16:14.678282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.881 qpair failed and we were unable to recover it. 
00:36:06.881 [2024-12-15 13:16:14.688261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.688358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.688372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.688378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.688384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.688400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:06.882 [2024-12-15 13:16:14.698266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.698315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.698329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.698336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.698342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.698357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:06.882 [2024-12-15 13:16:14.708292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.708348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.708362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.708370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.708376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.708391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:06.882 [2024-12-15 13:16:14.718313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.718371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.718384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.718391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.718401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.718415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:06.882 [2024-12-15 13:16:14.728298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.728354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.728368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.728374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.728381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.728396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:06.882 [2024-12-15 13:16:14.738394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.738450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.738463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.738470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.738476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.738491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:06.882 [2024-12-15 13:16:14.748409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.748465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.748478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.748485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.748491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.748505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:06.882 [2024-12-15 13:16:14.758395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.758492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.758505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.758512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.758518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.758532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:06.882 [2024-12-15 13:16:14.768453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.768518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.768531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.768538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.768544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.768558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:06.882 [2024-12-15 13:16:14.778518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.882 [2024-12-15 13:16:14.778594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.882 [2024-12-15 13:16:14.778608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.882 [2024-12-15 13:16:14.778615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.882 [2024-12-15 13:16:14.778621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:06.882 [2024-12-15 13:16:14.778635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:06.882 qpair failed and we were unable to recover it. 
00:36:07.142 [2024-12-15 13:16:14.788488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.142 [2024-12-15 13:16:14.788563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.142 [2024-12-15 13:16:14.788576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.142 [2024-12-15 13:16:14.788583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.142 [2024-12-15 13:16:14.788590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:07.142 [2024-12-15 13:16:14.788604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.142 qpair failed and we were unable to recover it. 
00:36:07.142 [2024-12-15 13:16:14.798504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.142 [2024-12-15 13:16:14.798562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.142 [2024-12-15 13:16:14.798576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.142 [2024-12-15 13:16:14.798583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.142 [2024-12-15 13:16:14.798589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:07.142 [2024-12-15 13:16:14.798604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.142 qpair failed and we were unable to recover it. 
00:36:07.142 [2024-12-15 13:16:14.808598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.142 [2024-12-15 13:16:14.808656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.142 [2024-12-15 13:16:14.808673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.142 [2024-12-15 13:16:14.808679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.142 [2024-12-15 13:16:14.808686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:07.142 [2024-12-15 13:16:14.808701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.142 qpair failed and we were unable to recover it. 
00:36:07.142 [2024-12-15 13:16:14.818611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.142 [2024-12-15 13:16:14.818665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.142 [2024-12-15 13:16:14.818678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.142 [2024-12-15 13:16:14.818684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.142 [2024-12-15 13:16:14.818691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:07.142 [2024-12-15 13:16:14.818705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.142 qpair failed and we were unable to recover it. 
00:36:07.142 [2024-12-15 13:16:14.828609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.142 [2024-12-15 13:16:14.828678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.142 [2024-12-15 13:16:14.828692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.142 [2024-12-15 13:16:14.828698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.142 [2024-12-15 13:16:14.828704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:07.142 [2024-12-15 13:16:14.828719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.142 qpair failed and we were unable to recover it. 
00:36:07.142 [2024-12-15 13:16:14.838711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.142 [2024-12-15 13:16:14.838797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.142 [2024-12-15 13:16:14.838810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.142 [2024-12-15 13:16:14.838817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.142 [2024-12-15 13:16:14.838827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbae4000b90 00:36:07.142 [2024-12-15 13:16:14.838843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:07.142 qpair failed and we were unable to recover it. 
00:36:07.142 [2024-12-15 13:16:14.848733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.142 [2024-12-15 13:16:14.848847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.142 [2024-12-15 13:16:14.848904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.142 [2024-12-15 13:16:14.848938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.142 [2024-12-15 13:16:14.848959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f9cd0 00:36:07.142 [2024-12-15 13:16:14.849009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:07.142 qpair failed and we were unable to recover it. 00:36:07.142 [2024-12-15 13:16:14.849055] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:07.142 A controller has encountered a failure and is being reset. 00:36:07.142 Controller properly reset. 00:36:07.143 Initializing NVMe Controllers 00:36:07.143 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:07.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:07.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:07.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:07.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:07.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:07.143 Initialization complete. Launching workers. 
00:36:07.143 Starting thread on core 1 00:36:07.143 Starting thread on core 2 00:36:07.143 Starting thread on core 3 00:36:07.143 Starting thread on core 0 00:36:07.143 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:07.143 00:36:07.143 real 0m10.838s 00:36:07.143 user 0m19.409s 00:36:07.143 sys 0m4.720s 00:36:07.143 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:07.143 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.143 ************************************ 00:36:07.143 END TEST nvmf_target_disconnect_tc2 00:36:07.143 ************************************ 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:07.402 rmmod nvme_tcp 00:36:07.402 rmmod nvme_fabrics 00:36:07.402 rmmod nvme_keyring 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1208731 ']' 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1208731 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1208731 ']' 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1208731 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1208731 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1208731' 00:36:07.402 killing process with pid 1208731 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1208731 00:36:07.402 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1208731 00:36:07.661 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:07.662 13:16:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.566 13:16:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:09.566 00:36:09.566 real 0m19.581s 00:36:09.566 user 0m47.460s 00:36:09.566 sys 0m9.588s 00:36:09.566 13:16:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.566 13:16:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:09.566 ************************************ 00:36:09.566 END TEST nvmf_target_disconnect 00:36:09.566 ************************************ 00:36:09.825 13:16:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:09.825 00:36:09.825 real 7m21.302s 00:36:09.825 user 16m50.250s 00:36:09.825 sys 2m8.122s 00:36:09.825 13:16:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.825 13:16:17 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.825 ************************************ 00:36:09.825 END TEST nvmf_host 00:36:09.825 ************************************ 00:36:09.825 13:16:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:09.825 13:16:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:09.825 13:16:17 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:09.825 13:16:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:09.825 13:16:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:09.825 13:16:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:09.825 ************************************ 00:36:09.825 START TEST nvmf_target_core_interrupt_mode 00:36:09.825 ************************************ 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:09.825 * Looking for test storage... 
00:36:09.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:09.825 13:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.825 --rc 
genhtml_branch_coverage=1 00:36:09.825 --rc genhtml_function_coverage=1 00:36:09.825 --rc genhtml_legend=1 00:36:09.825 --rc geninfo_all_blocks=1 00:36:09.825 --rc geninfo_unexecuted_blocks=1 00:36:09.825 00:36:09.825 ' 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.825 --rc genhtml_branch_coverage=1 00:36:09.825 --rc genhtml_function_coverage=1 00:36:09.825 --rc genhtml_legend=1 00:36:09.825 --rc geninfo_all_blocks=1 00:36:09.825 --rc geninfo_unexecuted_blocks=1 00:36:09.825 00:36:09.825 ' 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.825 --rc genhtml_branch_coverage=1 00:36:09.825 --rc genhtml_function_coverage=1 00:36:09.825 --rc genhtml_legend=1 00:36:09.825 --rc geninfo_all_blocks=1 00:36:09.825 --rc geninfo_unexecuted_blocks=1 00:36:09.825 00:36:09.825 ' 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.825 --rc genhtml_branch_coverage=1 00:36:09.825 --rc genhtml_function_coverage=1 00:36:09.825 --rc genhtml_legend=1 00:36:09.825 --rc geninfo_all_blocks=1 00:36:09.825 --rc geninfo_unexecuted_blocks=1 00:36:09.825 00:36:09.825 ' 00:36:09.825 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.085 
13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.085 13:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.085 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:10.086 
13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:10.086 ************************************ 00:36:10.086 START TEST nvmf_abort 00:36:10.086 ************************************ 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:10.086 * Looking for test storage... 
00:36:10.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:10.086 13:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:10.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.086 --rc genhtml_branch_coverage=1 00:36:10.086 --rc genhtml_function_coverage=1 00:36:10.086 --rc genhtml_legend=1 00:36:10.086 --rc geninfo_all_blocks=1 00:36:10.086 --rc geninfo_unexecuted_blocks=1 00:36:10.086 00:36:10.086 ' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:10.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.086 --rc genhtml_branch_coverage=1 00:36:10.086 --rc genhtml_function_coverage=1 00:36:10.086 --rc genhtml_legend=1 00:36:10.086 --rc geninfo_all_blocks=1 00:36:10.086 --rc geninfo_unexecuted_blocks=1 00:36:10.086 00:36:10.086 ' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:10.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.086 --rc genhtml_branch_coverage=1 00:36:10.086 --rc genhtml_function_coverage=1 00:36:10.086 --rc genhtml_legend=1 00:36:10.086 --rc geninfo_all_blocks=1 00:36:10.086 --rc geninfo_unexecuted_blocks=1 00:36:10.086 00:36:10.086 ' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:10.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.086 --rc genhtml_branch_coverage=1 00:36:10.086 --rc genhtml_function_coverage=1 00:36:10.086 --rc genhtml_legend=1 00:36:10.086 --rc geninfo_all_blocks=1 00:36:10.086 --rc geninfo_unexecuted_blocks=1 00:36:10.086 00:36:10.086 ' 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.086 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.346 13:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.346 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:10.346 13:16:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:10.346 13:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:15.698 13:16:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:15.698 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:15.698 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.698 
13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:15.698 Found net devices under 0000:af:00.0: cvl_0_0 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:15.698 Found net devices under 0000:af:00.1: cvl_0_1 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:15.698 13:16:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:15.698 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:15.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:36:15.959 00:36:15.959 --- 10.0.0.2 ping statistics --- 00:36:15.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.959 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:15.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:36:15.959 00:36:15.959 --- 10.0.0.1 ping statistics --- 00:36:15.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.959 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1213407 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1213407 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1213407 ']' 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.959 13:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.218 [2024-12-15 13:16:23.906653] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:16.218 [2024-12-15 13:16:23.907601] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:16.218 [2024-12-15 13:16:23.907638] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.218 [2024-12-15 13:16:23.987793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:16.218 [2024-12-15 13:16:24.010445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.218 [2024-12-15 13:16:24.010485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.219 [2024-12-15 13:16:24.010492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.219 [2024-12-15 13:16:24.010501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.219 [2024-12-15 13:16:24.010506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:16.219 [2024-12-15 13:16:24.011733] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:16.219 [2024-12-15 13:16:24.011882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.219 [2024-12-15 13:16:24.011882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:16.219 [2024-12-15 13:16:24.074535] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:16.219 [2024-12-15 13:16:24.075360] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:16.219 [2024-12-15 13:16:24.075774] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:16.219 [2024-12-15 13:16:24.075872] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:16.219 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:16.219 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:16.219 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:16.219 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:16.219 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.478 [2024-12-15 13:16:24.144657] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:16.478 Malloc0 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.478 Delay0 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.478 [2024-12-15 13:16:24.232596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.478 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:16.478 [2024-12-15 13:16:24.318357] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:19.014 Initializing NVMe Controllers 00:36:19.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:19.014 controller IO queue size 128 less than required 00:36:19.014 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:19.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:19.014 Initialization complete. Launching workers. 
00:36:19.014 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37927 00:36:19.014 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37984, failed to submit 66 00:36:19.014 success 37927, unsuccessful 57, failed 0 00:36:19.014 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:19.014 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.014 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.014 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.014 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:19.015 rmmod nvme_tcp 00:36:19.015 rmmod nvme_fabrics 00:36:19.015 rmmod nvme_keyring 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:19.015 13:16:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1213407 ']' 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1213407 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1213407 ']' 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1213407 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1213407 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1213407' 00:36:19.015 killing process with pid 1213407 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1213407 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1213407 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:19.015 13:16:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:19.015 13:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.917 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:20.917 00:36:20.917 real 0m10.966s 00:36:20.917 user 0m10.198s 00:36:20.917 sys 0m5.532s 00:36:20.917 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.917 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.917 ************************************ 00:36:20.917 END TEST nvmf_abort 00:36:20.917 ************************************ 00:36:20.917 13:16:28 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:20.917 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:20.917 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:20.917 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:21.177 ************************************ 00:36:21.177 START TEST nvmf_ns_hotplug_stress 00:36:21.177 ************************************ 00:36:21.177 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:21.177 * Looking for test storage... 
00:36:21.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:21.177 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:21.177 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:36:21.177 13:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:21.177 13:16:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:21.177 13:16:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.177 --rc genhtml_branch_coverage=1 00:36:21.177 --rc genhtml_function_coverage=1 00:36:21.177 --rc genhtml_legend=1 00:36:21.177 --rc geninfo_all_blocks=1 00:36:21.177 --rc geninfo_unexecuted_blocks=1 00:36:21.177 00:36:21.177 ' 00:36:21.177 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:21.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.177 --rc genhtml_branch_coverage=1 00:36:21.177 --rc genhtml_function_coverage=1 00:36:21.178 --rc genhtml_legend=1 00:36:21.178 --rc geninfo_all_blocks=1 00:36:21.178 --rc geninfo_unexecuted_blocks=1 00:36:21.178 00:36:21.178 ' 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:21.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.178 --rc genhtml_branch_coverage=1 00:36:21.178 --rc genhtml_function_coverage=1 00:36:21.178 --rc genhtml_legend=1 00:36:21.178 --rc geninfo_all_blocks=1 00:36:21.178 --rc geninfo_unexecuted_blocks=1 00:36:21.178 00:36:21.178 ' 00:36:21.178 13:16:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:21.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.178 --rc genhtml_branch_coverage=1 00:36:21.178 --rc genhtml_function_coverage=1 00:36:21.178 --rc genhtml_legend=1 00:36:21.178 --rc geninfo_all_blocks=1 00:36:21.178 --rc geninfo_unexecuted_blocks=1 00:36:21.178 00:36:21.178 ' 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.178 13:16:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.178 
13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:21.178 13:16:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:27.748 
13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:27.748 13:16:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:27.748 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:27.748 13:16:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:27.748 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.748 
13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:27.748 Found net devices under 0000:af:00.0: cvl_0_0 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:27.748 Found net devices under 0000:af:00.1: cvl_0_1 00:36:27.748 
13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:27.748 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:27.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:27.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:36:27.749 00:36:27.749 --- 10.0.0.2 ping statistics --- 00:36:27.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.749 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:27.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:27.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:36:27.749 00:36:27.749 --- 10.0.0.1 ping statistics --- 00:36:27.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.749 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:27.749 13:16:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1217122 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1217122 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1217122 ']' 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:27.749 13:16:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:27.749 [2024-12-15 13:16:34.936485] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:27.749 [2024-12-15 13:16:34.937368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:27.749 [2024-12-15 13:16:34.937400] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:27.749 [2024-12-15 13:16:35.017557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:27.749 [2024-12-15 13:16:35.039562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:27.749 [2024-12-15 13:16:35.039597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:27.749 [2024-12-15 13:16:35.039604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:27.749 [2024-12-15 13:16:35.039611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:27.749 [2024-12-15 13:16:35.039616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:27.749 [2024-12-15 13:16:35.040804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:27.749 [2024-12-15 13:16:35.040842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.749 [2024-12-15 13:16:35.040842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:27.749 [2024-12-15 13:16:35.103722] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:27.749 [2024-12-15 13:16:35.104617] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:27.749 [2024-12-15 13:16:35.104780] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:27.749 [2024-12-15 13:16:35.104940] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:27.749 [2024-12-15 13:16:35.342123] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:27.749 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:28.008 [2024-12-15 13:16:35.746420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:28.008 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:28.267 13:16:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:28.267 Malloc0 00:36:28.267 13:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:28.526 Delay0 00:36:28.526 13:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.785 13:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:29.044 NULL1 00:36:29.044 13:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:29.302 13:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1217585 00:36:29.302 13:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:29.302 13:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:29.302 13:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.238 Read completed with error (sct=0, sc=11) 00:36:30.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.238 13:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:36:30.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.497 13:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:30.497 13:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:30.756 true 00:36:30.756 13:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:30.756 13:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.692 13:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:31.692 13:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:31.692 13:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:31.951 true 00:36:31.951 13:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:31.951 13:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.210 13:16:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.469 13:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:32.469 13:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:32.469 true 00:36:32.469 13:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:32.469 13:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.847 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.847 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1004 00:36:33.847 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:34.106 true 00:36:34.106 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:34.106 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.042 13:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.042 13:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:35.042 13:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:35.301 true 00:36:35.301 13:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:35.302 13:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.302 13:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.560 13:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:35.560 13:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:35.819 true 00:36:35.819 13:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:35.819 13:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:36.758 13:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.017 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:37.017 13:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:37.017 13:16:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:37.275 true 00:36:37.275 13:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:37.275 13:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.216 13:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.216 13:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:38.216 13:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:38.474 true 00:36:38.474 13:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:38.474 13:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.732 13:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.991 13:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:38.991 13:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:38.991 true 00:36:39.249 13:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:39.249 13:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.186 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.444 13:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:40.444 13:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:40.444 true 00:36:40.444 13:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:40.444 13:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.702 13:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.960 13:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:40.960 13:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:41.218 true 
00:36:41.218 13:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:41.218 13:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:42.152 13:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.410 13:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:42.410 13:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:42.410 true 00:36:42.668 13:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:42.668 13:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.668 13:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.926 13:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:42.926 13:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:43.184 true 00:36:43.184 13:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:43.184 13:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.560 13:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.560 13:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:44.560 13:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:44.560 true 00:36:44.819 13:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:44.819 13:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.819 13:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.078 13:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:45.078 13:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:45.336 true 00:36:45.337 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:45.337 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:46.712 13:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:46.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:46.712 13:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:46.712 13:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:46.712 true 00:36:46.970 13:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:46.970 13:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.970 13:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.228 13:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:47.228 13:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:47.485 true 00:36:47.485 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:47.485 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.859 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:48.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
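The repeated "Message suppressed 999 times" lines correspond to the `-Q 1000` option passed to spdk_nvme_perf above: identical I/O error messages are rate-limited so that only every 1000th occurrence is printed, together with a suppression count. A sketch of that counting scheme (an assumption about the observed behavior, not SPDK's actual implementation):

```shell
#!/bin/sh
# Emit every 1000th identical message with a "suppressed" summary,
# mimicking the -Q 1000 rate limiting visible in this log (sketch only).
quiet=1000
emitted=0
i=0
while [ "$i" -lt 2000 ]; do
    i=$((i + 1))
    if [ $((i % quiet)) -eq 0 ]; then
        echo "Message suppressed $((quiet - 1)) times: Read completed with error (sct=0, sc=11)"
        emitted=$((emitted + 1))
    fi
done
```

With 2000 identical errors, only two summary lines reach the log instead of 2000.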
00:36:48.859 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:48.859 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:49.117 true 00:36:49.117 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:49.117 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:50.051 13:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.051 13:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:50.052 13:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:50.310 true 00:36:50.310 13:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:50.310 13:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.569 13:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.569 13:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:50.569 13:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:50.828 true 00:36:50.828 13:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:50.828 13:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.202 13:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.203 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:52.203 13:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:52.203 13:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 
00:36:52.461 true 00:36:52.461 13:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:52.461 13:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.396 13:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.396 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:53.396 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:53.653 true 00:36:53.653 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:53.653 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.912 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.912 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:53.912 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1023 00:36:54.170 true 00:36:54.170 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:54.170 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.105 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.105 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.105 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:55.363 13:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:55.363 13:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:55.621 true 00:36:55.621 13:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:55.621 13:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.879 13:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.138 13:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 
00:36:56.138 13:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:56.138 true 00:36:56.138 13:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:56.138 13:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.512 13:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:57.512 13:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:57.512 13:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:57.769 true 00:36:57.769 13:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585 00:36:57.769 13:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:58.703 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:58.704 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:36:58.704 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:36:58.962 true
00:36:58.962 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585
00:36:58.962 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:59.220 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:59.479 Initializing NVMe Controllers
00:36:59.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:59.479 Controller IO queue size 128, less than required.
00:36:59.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:59.479 Controller IO queue size 128, less than required.
00:36:59.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:59.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:59.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:36:59.479 Initialization complete. Launching workers.
00:36:59.479 ========================================================
00:36:59.479 Latency(us)
00:36:59.479 Device Information : IOPS MiB/s Average min max
00:36:59.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1770.16 0.86 46915.41 2580.41 1022490.82
00:36:59.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17345.12 8.47 7379.10 2001.53 296606.72
00:36:59.479 ========================================================
00:36:59.479 Total : 19115.29 9.33 11040.34 2001.53 1022490.82
00:36:59.479
00:36:59.479 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:36:59.479 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:36:59.479 true
00:36:59.479 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217585
00:36:59.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1217585) - No such process
00:36:59.479 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1217585
00:36:59.479 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:59.737 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:59.996 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:59.996 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:59.996 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:59.996 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:59.996 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:00.255 null0 00:37:00.255 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:00.255 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:00.255 13:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:00.255 null1 00:37:00.255 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:00.255 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:00.255 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:00.514 null2 00:37:00.514 13:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:00.514 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:00.514 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:00.774 null3 00:37:00.774 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:00.774 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:00.774 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:00.774 null4 00:37:01.033 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:01.033 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:01.033 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:01.033 null5 00:37:01.033 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:01.033 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:01.033 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:01.292 null6 00:37:01.292 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:01.292 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:01.292 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:01.551 null7 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:01.551 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1222555 1222557 1222558 1222560 1222562 1222564 1222566 1222568 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:01.552 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:01.810 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.810 13:17:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:01.811 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:02.069 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.069 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:02.069 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:02.069 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:02.069 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:02.069 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:02.069 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:02.069 13:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.328 13:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.328 13:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.328 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:02.587 13:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.587 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:02.846 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.105 13:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.105 13:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.105 13:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:03.363 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:03.363 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:03.363 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:03.363 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.363 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:03.363 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:03.363 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:03.363 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:03.622 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:03.880 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.881 13:17:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.881 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.139 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:04.139 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.139 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:37:04.139 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:04.139 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:04.139 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:04.139 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:04.140 13:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.398 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.399 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:04.658 13:17:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:04.658 13:17:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.658 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:04.920 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:04.920 13:17:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:04.920 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:04.920 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.920 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:04.920 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:04.920 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:04.920 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.224 13:17:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.224 13:17:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.224 13:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:05.590 13:17:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:05.590 13:17:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:05.590 rmmod nvme_tcp 00:37:05.590 rmmod nvme_fabrics 00:37:05.590 rmmod nvme_keyring 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1217122 ']' 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1217122 00:37:05.590 13:17:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1217122 ']' 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1217122 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:05.590 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217122 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217122' 00:37:05.849 killing process with pid 1217122 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1217122 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1217122 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:05.849 
13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:05.849 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:08.386 00:37:08.386 real 0m46.915s 00:37:08.386 user 2m55.842s 00:37:08.386 sys 0m19.074s 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:08.386 ************************************ 00:37:08.386 END TEST nvmf_ns_hotplug_stress 00:37:08.386 ************************************ 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh 
--transport=tcp --interrupt-mode 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:08.386 ************************************ 00:37:08.386 START TEST nvmf_delete_subsystem 00:37:08.386 ************************************ 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:08.386 * Looking for test storage... 00:37:08.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 
00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:08.386 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:08.386 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:08.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.387 --rc genhtml_branch_coverage=1 00:37:08.387 --rc genhtml_function_coverage=1 00:37:08.387 --rc genhtml_legend=1 00:37:08.387 --rc geninfo_all_blocks=1 00:37:08.387 --rc geninfo_unexecuted_blocks=1 00:37:08.387 00:37:08.387 ' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:08.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.387 --rc genhtml_branch_coverage=1 00:37:08.387 --rc genhtml_function_coverage=1 00:37:08.387 --rc genhtml_legend=1 00:37:08.387 --rc geninfo_all_blocks=1 00:37:08.387 --rc geninfo_unexecuted_blocks=1 00:37:08.387 00:37:08.387 ' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:08.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.387 --rc genhtml_branch_coverage=1 00:37:08.387 --rc genhtml_function_coverage=1 00:37:08.387 --rc genhtml_legend=1 00:37:08.387 --rc geninfo_all_blocks=1 00:37:08.387 --rc geninfo_unexecuted_blocks=1 00:37:08.387 00:37:08.387 ' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:08.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.387 --rc genhtml_branch_coverage=1 00:37:08.387 --rc genhtml_function_coverage=1 00:37:08.387 --rc genhtml_legend=1 00:37:08.387 --rc geninfo_all_blocks=1 00:37:08.387 --rc geninfo_unexecuted_blocks=1 00:37:08.387 00:37:08.387 ' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:08.387 13:17:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:08.387 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.959 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.959 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.959 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.959 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.960 13:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.960 13:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:14.960 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:14.960 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.960 13:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:14.960 Found net devices under 0000:af:00.0: cvl_0_0 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:14.960 Found net devices under 0000:af:00.1: cvl_0_1 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.960 13:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.960 13:17:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.960 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:14.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:37:14.960 00:37:14.960 --- 10.0.0.2 ping statistics --- 00:37:14.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.961 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:14.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:37:14.961 00:37:14.961 --- 10.0.0.1 ping statistics --- 00:37:14.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.961 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.961 
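The nvmf_tcp_init steps traced above (flush the ports, create a namespace, move the target port into it, assign addresses, open TCP/4420, verify with ping) can be sketched as below. Interface names and addresses are copied from the log; the `run()` dry-run wrapper is a hypothetical helper so the sequence can be printed without root privileges or the cvl_0_* hardware ports.

```shell
# Sketch of the nvmf_tcp_init sequence from the trace above (assumed
# reconstruction, not the harness itself).
run() { echo "+ $*"; }    # swap the echo for "$@" to actually execute (needs root)

nvmf_tcp_init_sketch() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    local tgt_ip=10.0.0.2 ini_ip=10.0.0.1

    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"              # target port moves into the namespace
    run ip addr add "$ini_ip/24" dev "$ini_if"         # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 "$tgt_ip"                            # cross-namespace reachability check
    run ip netns exec "$ns" ping -c 1 "$ini_ip"
}
nvmf_tcp_init_sketch
```

Moving the physical port into its own namespace lets target and initiator share one host while still talking over a real NIC, which is why both pings in the log traverse the cvl_0_* pair rather than loopback.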
13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1226860 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1226860 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1226860 ']' 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
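The nvmfappstart line above launches the target inside the cvl_0_0_ns_spdk namespace with core mask 0x3, tracepoint mask 0xFFFF, and --interrupt-mode (the defining flag of this nvmf_target_core_interrupt_mode suite). A minimal sketch, using the binary path shown in the log; `build_nvmf_tgt_cmd` is an illustrative helper, not part of the harness:

```shell
# Prints the nvmf_tgt launch command as it appears in the trace.
build_nvmf_tgt_cmd() {
    local spdk_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    echo ip netns exec cvl_0_0_ns_spdk \
        "$spdk_bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3
}
build_nvmf_tgt_cmd
```

waitforlisten then polls /var/tmp/spdk.sock until the app is up, which is the "Waiting for process to start up..." message in the log.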
00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:14.961 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.961 [2024-12-15 13:17:21.961851] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:14.961 [2024-12-15 13:17:21.962752] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:14.961 [2024-12-15 13:17:21.962784] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.961 [2024-12-15 13:17:22.038720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:14.961 [2024-12-15 13:17:22.060117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.961 [2024-12-15 13:17:22.060154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.961 [2024-12-15 13:17:22.060164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.961 [2024-12-15 13:17:22.060170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.961 [2024-12-15 13:17:22.060175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.961 [2024-12-15 13:17:22.061283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.961 [2024-12-15 13:17:22.061283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.961 [2024-12-15 13:17:22.124047] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:14.961 [2024-12-15 13:17:22.124599] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:14.961 [2024-12-15 13:17:22.124785] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.961 [2024-12-15 13:17:22.190153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.961 [2024-12-15 13:17:22.222423] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.961 NULL1 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.961 Delay0 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1226887 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:14.961 13:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:14.961 [2024-12-15 13:17:22.335098] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
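The rpc_cmd calls traced above configure the target for the delete-while-I/O test: create the TCP transport, the cnode1 subsystem and its listener, then a null bdev wrapped in a delay bdev that is exposed as the namespace. A sketch of that sequence, with arguments copied from the log; `rpc()` is a dry-run stand-in for the suite's rpc_cmd wrapper (rpc.py against /var/tmp/spdk.sock):

```shell
# Sketch of the delete_subsystem.sh setup RPCs seen in the trace.
rpc() { echo "rpc.py $*"; }   # dry-run stub; the real wrapper talks to the target

configure_delete_subsystem_test() {
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512                 # 1000 MiB backing bdev, 512 B blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s latencies keep I/O in flight
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}
configure_delete_subsystem_test
```

spdk_nvme_perf then drives queue-depth-128 I/O at the Delay0-backed namespace while nvmf_delete_subsystem is issued; the delay bdev's long latencies are what guarantee commands are still outstanding at deletion time, producing the bursts of "completed with error (sct=0, sc=8)" aborts in the log.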
00:37:16.862 13:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:16.862 13:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.862 13:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 starting I/O failed: -6 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 starting I/O failed: -6 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 starting I/O failed: -6 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 starting I/O failed: -6 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 starting I/O failed: -6 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 starting I/O failed: -6 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, 
sc=8) 00:37:16.862 starting I/O failed: -6 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 starting I/O failed: -6 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Read completed with error (sct=0, sc=8) 00:37:16.862 Write completed with error (sct=0, sc=8) 00:37:16.862 starting I/O failed: -6 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 [2024-12-15 13:17:24.459248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abf70 is same with the state(6) to be set 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 
00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write 
completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Read 
completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 starting I/O failed: -6 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 [2024-12-15 13:17:24.461940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c3c000c80 is same with the state(6) to be set 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 
00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Write completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read completed with error (sct=0, sc=8) 00:37:16.863 Read 
completed with error (sct=0, sc=8) 00:37:16.863 [2024-12-15 13:17:24.462296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c3c00d4d0 is same with the state(6) to be set 00:37:17.799 [2024-12-15 13:17:25.430967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10aa190 is same with the state(6) to be set 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 [2024-12-15 13:17:25.462505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x10ac400 is same with the state(6) to be set 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 [2024-12-15 13:17:25.462927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ac7c0 is same with the state(6) to be set 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read 
completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Read completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.799 Write completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 [2024-12-15 13:17:25.465271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c3c00d060 is same with the state(6) to be set 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Write completed with error (sct=0, sc=8) 00:37:17.800 Write completed with error (sct=0, sc=8) 00:37:17.800 Write completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Write completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Write completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Read completed with error (sct=0, sc=8) 00:37:17.800 Write completed with error (sct=0, sc=8) 00:37:17.800 [2024-12-15 13:17:25.466027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c3c00d800 is same with the state(6) to be set 00:37:17.800 Initializing NVMe Controllers 00:37:17.800 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:17.800 Controller IO queue 
size 128, less than required. 00:37:17.800 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:17.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:17.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:17.800 Initialization complete. Launching workers. 00:37:17.800 ======================================================== 00:37:17.800 Latency(us) 00:37:17.800 Device Information : IOPS MiB/s Average min max 00:37:17.800 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.16 0.09 879523.06 291.71 1006512.70 00:37:17.800 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.26 0.08 934284.31 377.49 1043604.36 00:37:17.800 ======================================================== 00:37:17.800 Total : 332.42 0.16 905100.17 291.71 1043604.36 00:37:17.800 00:37:17.800 [2024-12-15 13:17:25.466607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10aa190 (9): Bad file descriptor 00:37:17.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:17.800 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.800 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:17.800 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1226887 00:37:17.800 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@35 -- # kill -0 1226887 00:37:18.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1226887) - No such process 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1226887 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1226887 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1226887 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.368 13:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.368 [2024-12-15 13:17:25.998296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.368 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.368 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.368 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.368 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.368 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.368 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1227553 00:37:18.368 13:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:18.368 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:18.368 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227553 00:37:18.368 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:18.368 [2024-12-15 13:17:26.082690] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:18.627 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:18.627 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227553 00:37:18.627 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:19.195 13:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:19.195 13:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227553 00:37:19.195 13:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:19.763 13:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:19.763 13:17:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227553 00:37:19.763 13:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:20.330 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:20.330 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227553 00:37:20.330 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:20.898 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:20.898 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227553 00:37:20.898 13:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:21.156 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:21.156 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227553 00:37:21.156 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:21.416 Initializing NVMe Controllers 00:37:21.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:21.416 Controller IO queue size 128, less than required. 00:37:21.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:37:21.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:21.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:21.416 Initialization complete. Launching workers. 00:37:21.416 ======================================================== 00:37:21.416 Latency(us) 00:37:21.416 Device Information : IOPS MiB/s Average min max 00:37:21.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002080.18 1000127.32 1005700.46 00:37:21.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003806.05 1000126.96 1041374.06 00:37:21.416 ======================================================== 00:37:21.416 Total : 256.00 0.12 1002943.11 1000126.96 1041374.06 00:37:21.416 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227553 00:37:21.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1227553) - No such process 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1227553 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:21.675 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:21.675 rmmod nvme_tcp 00:37:21.675 rmmod nvme_fabrics 00:37:21.934 rmmod nvme_keyring 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1226860 ']' 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1226860 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1226860 ']' 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1226860 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1226860 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1226860' 00:37:21.934 killing process with pid 1226860 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1226860 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1226860 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:21.934 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:24.475 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:24.475 00:37:24.475 real 0m16.067s 00:37:24.475 user 0m26.086s 00:37:24.475 sys 0m6.083s 00:37:24.475 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:24.475 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.475 ************************************ 00:37:24.475 END TEST nvmf_delete_subsystem 00:37:24.475 ************************************ 00:37:24.475 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:24.475 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:24.475 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:24.475 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:24.475 ************************************ 00:37:24.476 START TEST nvmf_host_management 00:37:24.476 ************************************ 00:37:24.476 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:24.476 * Looking for test storage... 
00:37:24.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:24.476 13:17:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.476 --rc genhtml_branch_coverage=1 00:37:24.476 --rc genhtml_function_coverage=1 00:37:24.476 --rc genhtml_legend=1 00:37:24.476 --rc geninfo_all_blocks=1 00:37:24.476 --rc geninfo_unexecuted_blocks=1 00:37:24.476 00:37:24.476 ' 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.476 --rc genhtml_branch_coverage=1 00:37:24.476 --rc genhtml_function_coverage=1 00:37:24.476 --rc genhtml_legend=1 00:37:24.476 --rc geninfo_all_blocks=1 00:37:24.476 --rc geninfo_unexecuted_blocks=1 00:37:24.476 00:37:24.476 ' 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:24.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.476 --rc genhtml_branch_coverage=1 00:37:24.476 --rc genhtml_function_coverage=1 00:37:24.476 --rc genhtml_legend=1 00:37:24.476 --rc geninfo_all_blocks=1 00:37:24.476 --rc geninfo_unexecuted_blocks=1 00:37:24.476 00:37:24.476 ' 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:24.476 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.476 --rc genhtml_branch_coverage=1 00:37:24.476 --rc genhtml_function_coverage=1 00:37:24.476 --rc genhtml_legend=1 00:37:24.476 --rc geninfo_all_blocks=1 00:37:24.476 --rc geninfo_unexecuted_blocks=1 00:37:24.476 00:37:24.476 ' 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:24.476 13:17:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.476 
13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.476 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:24.477 13:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:31.048 
13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.048 13:17:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:31.048 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.048 13:17:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:31.048 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.048 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.049 13:17:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:31.049 Found net devices under 0000:af:00.0: cvl_0_0 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:31.049 Found net devices under 0000:af:00.1: cvl_0_1 00:37:31.049 13:17:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:31.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:31.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:37:31.049 00:37:31.049 --- 10.0.0.2 ping statistics --- 00:37:31.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.049 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:37:31.049 13:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:31.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:31.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:37:31.049 00:37:31.049 --- 10.0.0.1 ping statistics --- 00:37:31.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.049 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1231471 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1231471 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1231471 ']' 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:31.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.049 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.049 [2024-12-15 13:17:38.111692] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:31.049 [2024-12-15 13:17:38.112571] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:31.049 [2024-12-15 13:17:38.112604] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:31.049 [2024-12-15 13:17:38.187337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:31.049 [2024-12-15 13:17:38.217630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:31.049 [2024-12-15 13:17:38.217678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:31.049 [2024-12-15 13:17:38.217689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:31.049 [2024-12-15 13:17:38.217699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:31.049 [2024-12-15 13:17:38.217705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:31.050 [2024-12-15 13:17:38.219605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:31.050 [2024-12-15 13:17:38.219717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:31.050 [2024-12-15 13:17:38.219834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:31.050 [2024-12-15 13:17:38.219841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:37:31.050 [2024-12-15 13:17:38.293075] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:31.050 [2024-12-15 13:17:38.293415] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:31.050 [2024-12-15 13:17:38.294187] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:31.050 [2024-12-15 13:17:38.294413] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:31.050 [2024-12-15 13:17:38.294461] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.050 [2024-12-15 13:17:38.368381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.050 13:17:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.050 Malloc0 00:37:31.050 [2024-12-15 13:17:38.460644] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1231669 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1231669 /var/tmp/bdevperf.sock 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1231669 ']' 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:31.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
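The `waitforlisten 1231669 /var/tmp/bdevperf.sock` step above blocks until the freshly launched bdevperf process is alive and serving its RPC socket. A minimal sketch of such a wait loop follows; this is an assumption-laden illustration, not the real `waitforlisten` from common/autotest_common.sh (which also retries an RPC ping) — `wait_for_rpc_sock` and its retry counts are invented names for clarity.

```shell
#!/usr/bin/env bash
# Illustrative sketch only -- NOT the real waitforlisten helper.
# Polls until the target process is still alive AND its UNIX-domain
# RPC socket exists on disk.
wait_for_rpc_sock() {
  local pid=$1 sock=${2:-/var/tmp/bdevperf.sock} max_retries=${3:-100}
  while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
    [ -S "$sock" ] && return 0              # socket is up, RPCs can be sent
    sleep 0.1
  done
  return 1                                  # timed out waiting
}

# Example: /dev/null is not a socket, so this gives up after one retry.
wait_for_rpc_sock $$ /dev/null 1 || echo "socket not ready"
```

Once the wait succeeds, the `rpc_cmd -s /var/tmp/bdevperf.sock ...` calls seen later in the trace are issued against that socket.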
00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:31.050 { 00:37:31.050 "params": { 00:37:31.050 "name": "Nvme$subsystem", 00:37:31.050 "trtype": "$TEST_TRANSPORT", 00:37:31.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:31.050 "adrfam": "ipv4", 00:37:31.050 "trsvcid": "$NVMF_PORT", 00:37:31.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:31.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:31.050 "hdgst": ${hdgst:-false}, 00:37:31.050 "ddgst": ${ddgst:-false} 00:37:31.050 }, 00:37:31.050 "method": "bdev_nvme_attach_controller" 00:37:31.050 } 00:37:31.050 EOF 00:37:31.050 )") 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:31.050 13:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:31.050 "params": { 00:37:31.050 "name": "Nvme0", 00:37:31.050 "trtype": "tcp", 00:37:31.050 "traddr": "10.0.0.2", 00:37:31.050 "adrfam": "ipv4", 00:37:31.050 "trsvcid": "4420", 00:37:31.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:31.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:31.050 "hdgst": false, 00:37:31.050 "ddgst": false 00:37:31.050 }, 00:37:31.050 "method": "bdev_nvme_attach_controller" 00:37:31.050 }' 00:37:31.050 [2024-12-15 13:17:38.558220] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:31.050 [2024-12-15 13:17:38.558273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231669 ] 00:37:31.050 [2024-12-15 13:17:38.634226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.050 [2024-12-15 13:17:38.657276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.310 Running I/O for 10 seconds... 
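The bdevperf launch traced above receives its bdev configuration through `--json /dev/fd/63`, generated by `gen_nvmf_target_json 0`. The sketch below reconstructs only the fragment visible in this log; `gen_controller_json` is an illustrative name (not the real helper), and the concrete values match the expansion printf'd in the trace.

```shell
#!/usr/bin/env bash
# Sketch of the per-controller JSON object that gen_nvmf_target_json 0
# expands to in this run (reconstructed from the printf output above).
# gen_controller_json is an illustrative name, not the real helper.
gen_controller_json() {
  local subsystem=${1:-0}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# bdevperf then consumes this on an anonymous fd, per the traced command:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_controller_json 0) \
#            -q 64 -o 65536 -w verify -t 10
gen_controller_json 0
```

Feeding the config via process substitution keeps the per-test parameters out of any on-disk file while still giving bdevperf a readable JSON path.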
00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:31.310 13:17:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=88 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 88 -ge 100 ']' 00:37:31.310 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:31.571 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:37:31.571 [2024-12-15 13:17:39.369520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1486240 is same with the state(6) to be set
[identical tcp.c:1790 *ERROR* message repeated for every subsequent recv-state event on tqpair=0x1486240, timestamps 13:17:39.369520 through 13:17:39.369943]
00:37:31.572 [2024-12-15 13:17:39.370012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.572 [2024-12-15 13:17:39.370045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same print_command/print_completion pair repeats for the remaining in-flight READs, cid:1 through cid:51 (lba:98432 through lba:104832, len:128 each), every one completed ABORTED - SQ DELETION (00/08)]
00:37:31.573 [2024-12-15 13:17:39.370830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 13:17:39.370984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.370993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.573 [2024-12-15 
13:17:39.371000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.573 [2024-12-15 13:17:39.371007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a6f50 is same with the state(6) to be set 00:37:31.573 [2024-12-15 13:17:39.371961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:31.573 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.573 task offset: 98304 on job bdev=Nvme0n1 fails 00:37:31.573 00:37:31.573 Latency(us) 00:37:31.573 [2024-12-15T12:17:39.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.573 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:31.573 Job: Nvme0n1 ended in about 0.40 seconds with error 00:37:31.573 Verification LBA range: start 0x0 length 0x400 00:37:31.573 Nvme0n1 : 0.40 1928.41 120.53 160.70 0.00 29804.92 3651.29 26838.55 00:37:31.574 [2024-12-15T12:17:39.481Z] =================================================================================================================== 00:37:31.574 [2024-12-15T12:17:39.481Z] Total : 1928.41 120.53 160.70 0.00 29804.92 3651.29 26838.55 00:37:31.574 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:31.574 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.574 [2024-12-15 13:17:39.374332] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:31.574 [2024-12-15 13:17:39.374353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2493490 (9): Bad file descriptor 00:37:31.574 13:17:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:31.574 [2024-12-15 13:17:39.375327] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:31.574 [2024-12-15 13:17:39.375397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:31.574 [2024-12-15 13:17:39.375419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:31.574 [2024-12-15 13:17:39.375433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:31.574 [2024-12-15 13:17:39.375441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:31.574 [2024-12-15 13:17:39.375448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:31.574 [2024-12-15 13:17:39.375458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2493490 00:37:31.574 [2024-12-15 13:17:39.375475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2493490 (9): Bad file descriptor 00:37:31.574 [2024-12-15 13:17:39.375487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:31.574 [2024-12-15 13:17:39.375493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:31.574 [2024-12-15 13:17:39.375501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:37:31.574 [2024-12-15 13:17:39.375509] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:31.574 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.574 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1231669 00:37:32.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1231669) - No such process 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:32.510 { 
00:37:32.510 "params": { 00:37:32.510 "name": "Nvme$subsystem", 00:37:32.510 "trtype": "$TEST_TRANSPORT", 00:37:32.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:32.510 "adrfam": "ipv4", 00:37:32.510 "trsvcid": "$NVMF_PORT", 00:37:32.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:32.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:32.510 "hdgst": ${hdgst:-false}, 00:37:32.510 "ddgst": ${ddgst:-false} 00:37:32.510 }, 00:37:32.510 "method": "bdev_nvme_attach_controller" 00:37:32.510 } 00:37:32.510 EOF 00:37:32.510 )") 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:32.510 13:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:32.510 "params": { 00:37:32.510 "name": "Nvme0", 00:37:32.510 "trtype": "tcp", 00:37:32.510 "traddr": "10.0.0.2", 00:37:32.511 "adrfam": "ipv4", 00:37:32.511 "trsvcid": "4420", 00:37:32.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:32.511 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:32.511 "hdgst": false, 00:37:32.511 "ddgst": false 00:37:32.511 }, 00:37:32.511 "method": "bdev_nvme_attach_controller" 00:37:32.511 }' 00:37:32.769 [2024-12-15 13:17:40.439043] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:37:32.769 [2024-12-15 13:17:40.439092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231968 ] 00:37:32.769 [2024-12-15 13:17:40.512820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.770 [2024-12-15 13:17:40.535343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.028 Running I/O for 1 seconds... 00:37:33.965 1984.00 IOPS, 124.00 MiB/s 00:37:33.965 Latency(us) 00:37:33.965 [2024-12-15T12:17:41.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.965 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:33.965 Verification LBA range: start 0x0 length 0x400 00:37:33.965 Nvme0n1 : 1.01 2036.57 127.29 0.00 0.00 30937.71 6428.77 26838.55 00:37:33.965 [2024-12-15T12:17:41.872Z] =================================================================================================================== 00:37:33.965 [2024-12-15T12:17:41.872Z] Total : 2036.57 127.29 0.00 0.00 30937.71 6428.77 26838.55 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:34.225 rmmod nvme_tcp 00:37:34.225 rmmod nvme_fabrics 00:37:34.225 rmmod nvme_keyring 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1231471 ']' 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1231471 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1231471 ']' 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1231471 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:34.225 13:17:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:34.225 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231471 00:37:34.225 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:34.225 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:34.225 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231471' 00:37:34.225 killing process with pid 1231471 00:37:34.225 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1231471 00:37:34.225 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1231471 00:37:34.485 [2024-12-15 13:17:42.165867] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:34.485 13:17:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.485 13:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.390 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:36.390 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:36.390 00:37:36.390 real 0m12.296s 00:37:36.390 user 0m17.917s 00:37:36.391 sys 0m6.190s 00:37:36.391 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.391 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:36.391 ************************************ 00:37:36.391 END TEST nvmf_host_management 00:37:36.391 ************************************ 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:36.651 
13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:36.651 ************************************ 00:37:36.651 START TEST nvmf_lvol 00:37:36.651 ************************************ 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:36.651 * Looking for test storage... 00:37:36.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:36.651 13:17:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:36.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.651 --rc genhtml_branch_coverage=1 00:37:36.651 --rc 
genhtml_function_coverage=1 00:37:36.651 --rc genhtml_legend=1 00:37:36.651 --rc geninfo_all_blocks=1 00:37:36.651 --rc geninfo_unexecuted_blocks=1 00:37:36.651 00:37:36.651 ' 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:36.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.651 --rc genhtml_branch_coverage=1 00:37:36.651 --rc genhtml_function_coverage=1 00:37:36.651 --rc genhtml_legend=1 00:37:36.651 --rc geninfo_all_blocks=1 00:37:36.651 --rc geninfo_unexecuted_blocks=1 00:37:36.651 00:37:36.651 ' 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:36.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.651 --rc genhtml_branch_coverage=1 00:37:36.651 --rc genhtml_function_coverage=1 00:37:36.651 --rc genhtml_legend=1 00:37:36.651 --rc geninfo_all_blocks=1 00:37:36.651 --rc geninfo_unexecuted_blocks=1 00:37:36.651 00:37:36.651 ' 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:36.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.651 --rc genhtml_branch_coverage=1 00:37:36.651 --rc genhtml_function_coverage=1 00:37:36.651 --rc genhtml_legend=1 00:37:36.651 --rc geninfo_all_blocks=1 00:37:36.651 --rc geninfo_unexecuted_blocks=1 00:37:36.651 00:37:36.651 ' 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:36.651 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.652 13:17:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:36.652 13:17:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:36.652 13:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:43.223 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.223 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:43.224 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:43.224 13:17:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:43.224 Found net devices under 0000:af:00.0: cvl_0_0 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:43.224 Found net devices under 0000:af:00.1: cvl_0_1 00:37:43.224 13:17:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:43.224 13:17:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:43.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:43.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:37:43.224 00:37:43.224 --- 10.0.0.2 ping statistics --- 00:37:43.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.224 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:43.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:43.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:37:43.224 00:37:43.224 --- 10.0.0.1 ping statistics --- 00:37:43.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.224 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:43.224 
13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1235667 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1235667 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1235667 ']' 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:43.224 [2024-12-15 13:17:50.480738] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:37:43.224 [2024-12-15 13:17:50.481651] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:43.224 [2024-12-15 13:17:50.481683] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:43.224 [2024-12-15 13:17:50.562090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:43.224 [2024-12-15 13:17:50.584269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:43.224 [2024-12-15 13:17:50.584304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:43.224 [2024-12-15 13:17:50.584311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:43.224 [2024-12-15 13:17:50.584317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:43.224 [2024-12-15 13:17:50.584321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:43.224 [2024-12-15 13:17:50.585528] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.224 [2024-12-15 13:17:50.585639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.224 [2024-12-15 13:17:50.585640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:43.224 [2024-12-15 13:17:50.648110] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:43.224 [2024-12-15 13:17:50.648861] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:43.224 [2024-12-15 13:17:50.649161] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:43.224 [2024-12-15 13:17:50.649283] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:43.224 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.225 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:43.225 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:43.225 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:43.225 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:43.225 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:43.225 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:43.225 [2024-12-15 13:17:50.878288] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.225 13:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:43.484 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:43.484 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:43.484 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:43.484 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:43.743 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:44.002 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=04e8c398-1861-4136-adb3-c086c9995d16 00:37:44.002 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 04e8c398-1861-4136-adb3-c086c9995d16 lvol 20 00:37:44.261 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7eec3bee-251b-4975-ba43-a84d47533a21 00:37:44.261 13:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:44.261 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7eec3bee-251b-4975-ba43-a84d47533a21 00:37:44.520 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:44.778 [2024-12-15 13:17:52.470186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.779 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:45.038 
13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1235934 00:37:45.038 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:45.038 13:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:45.975 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7eec3bee-251b-4975-ba43-a84d47533a21 MY_SNAPSHOT 00:37:46.234 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=087cde82-9521-44a2-b348-4ff5cf0bfa6e 00:37:46.234 13:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7eec3bee-251b-4975-ba43-a84d47533a21 30 00:37:46.493 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 087cde82-9521-44a2-b348-4ff5cf0bfa6e MY_CLONE 00:37:46.752 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a1b00460-5f66-4e4b-9c0a-f460a91dd658 00:37:46.752 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a1b00460-5f66-4e4b-9c0a-f460a91dd658 00:37:47.011 13:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1235934 00:37:55.129 Initializing NVMe Controllers 00:37:55.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:55.129 
Controller IO queue size 128, less than required. 00:37:55.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:55.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:55.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:55.130 Initialization complete. Launching workers. 00:37:55.130 ======================================================== 00:37:55.130 Latency(us) 00:37:55.130 Device Information : IOPS MiB/s Average min max 00:37:55.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12244.94 47.83 10454.34 272.14 51770.13 00:37:55.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12114.74 47.32 10569.25 5377.67 63494.31 00:37:55.130 ======================================================== 00:37:55.130 Total : 24359.69 95.16 10511.49 272.14 63494.31 00:37:55.130 00:37:55.130 13:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:55.389 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7eec3bee-251b-4975-ba43-a84d47533a21 00:37:55.648 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 04e8c398-1861-4136-adb3-c086c9995d16 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:55.907 rmmod nvme_tcp 00:37:55.907 rmmod nvme_fabrics 00:37:55.907 rmmod nvme_keyring 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1235667 ']' 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1235667 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1235667 ']' 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1235667 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1235667 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1235667' 00:37:55.907 killing process with pid 1235667 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1235667 00:37:55.907 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1235667 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.167 13:18:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.167 13:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:58.079 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:58.079 00:37:58.079 real 0m21.620s 00:37:58.079 user 0m55.323s 00:37:58.079 sys 0m9.490s 00:37:58.079 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.079 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:58.079 ************************************ 00:37:58.079 END TEST nvmf_lvol 00:37:58.079 ************************************ 00:37:58.338 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:58.338 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:58.338 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:58.338 13:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:58.338 ************************************ 00:37:58.338 START TEST nvmf_lvs_grow 00:37:58.338 ************************************ 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:58.338 * Looking for test storage... 
00:37:58.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:58.338 13:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:58.338 13:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:58.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:58.338 --rc genhtml_branch_coverage=1 00:37:58.338 --rc genhtml_function_coverage=1 00:37:58.338 --rc genhtml_legend=1 00:37:58.338 --rc geninfo_all_blocks=1 00:37:58.338 --rc geninfo_unexecuted_blocks=1 00:37:58.338 00:37:58.338 ' 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:58.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:58.338 --rc genhtml_branch_coverage=1 00:37:58.338 --rc genhtml_function_coverage=1 00:37:58.338 --rc genhtml_legend=1 00:37:58.338 --rc geninfo_all_blocks=1 00:37:58.338 --rc geninfo_unexecuted_blocks=1 00:37:58.338 00:37:58.338 ' 00:37:58.338 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:58.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:58.339 --rc genhtml_branch_coverage=1 00:37:58.339 --rc genhtml_function_coverage=1 00:37:58.339 --rc genhtml_legend=1 00:37:58.339 --rc geninfo_all_blocks=1 00:37:58.339 --rc geninfo_unexecuted_blocks=1 00:37:58.339 00:37:58.339 ' 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:58.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:58.339 --rc genhtml_branch_coverage=1 00:37:58.339 --rc genhtml_function_coverage=1 00:37:58.339 --rc genhtml_legend=1 00:37:58.339 --rc geninfo_all_blocks=1 00:37:58.339 --rc 
geninfo_unexecuted_blocks=1 00:37:58.339 00:37:58.339 ' 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:58.339 13:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.339 13:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:58.339 13:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:58.339 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:58.598 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:58.598 13:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:05.171 
13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:05.171 13:18:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:05.171 13:18:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:05.171 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:05.171 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.171 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:05.172 Found net devices under 0000:af:00.0: cvl_0_0 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.172 13:18:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:05.172 Found net devices under 0000:af:00.1: cvl_0_1 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:05.172 
13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:05.172 13:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:05.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:05.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:38:05.172 00:38:05.172 --- 10.0.0.2 ping statistics --- 00:38:05.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.172 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:05.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:05.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:38:05.172 00:38:05.172 --- 10.0.0.1 ping statistics --- 00:38:05.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.172 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:05.172 13:18:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1241681 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1241681 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1241681 ']' 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:05.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:05.172 [2024-12-15 13:18:12.151842] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:05.172 [2024-12-15 13:18:12.152771] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:05.172 [2024-12-15 13:18:12.152808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:05.172 [2024-12-15 13:18:12.232718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.172 [2024-12-15 13:18:12.254042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:05.172 [2024-12-15 13:18:12.254078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:05.172 [2024-12-15 13:18:12.254086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:05.172 [2024-12-15 13:18:12.254092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:05.172 [2024-12-15 13:18:12.254097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:05.172 [2024-12-15 13:18:12.254564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:05.172 [2024-12-15 13:18:12.317358] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:05.172 [2024-12-15 13:18:12.317557] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:05.172 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:05.173 [2024-12-15 13:18:12.551208] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:05.173 ************************************ 00:38:05.173 START TEST lvs_grow_clean 00:38:05.173 ************************************ 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:05.173 13:18:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:05.173 13:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:05.173 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f72a039-4d04-4408-b88b-008fcd833a89 00:38:05.173 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:05.173 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:05.516 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:05.516 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:05.516 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f72a039-4d04-4408-b88b-008fcd833a89 lvol 150 00:38:05.775 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=337a6ec0-dfad-406c-accd-ad341fe3bc85 00:38:05.775 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:05.775 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:05.775 [2024-12-15 13:18:13.619035] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:05.775 [2024-12-15 13:18:13.619165] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:05.775 true 00:38:05.776 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:05.776 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:06.034 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:06.035 13:18:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:06.293 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 337a6ec0-dfad-406c-accd-ad341fe3bc85 00:38:06.294 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:06.553 [2024-12-15 13:18:14.367427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:06.553 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1242070 00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1242070 /var/tmp/bdevperf.sock 00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1242070 ']' 00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:06.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:06.812 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:06.812 [2024-12-15 13:18:14.614338] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:06.812 [2024-12-15 13:18:14.614383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242070 ] 00:38:06.812 [2024-12-15 13:18:14.689588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.812 [2024-12-15 13:18:14.711965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:07.071 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:07.071 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:07.071 13:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:07.330 Nvme0n1 00:38:07.330 13:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:07.330 [ 00:38:07.330 { 00:38:07.330 "name": "Nvme0n1", 00:38:07.330 "aliases": [ 00:38:07.330 "337a6ec0-dfad-406c-accd-ad341fe3bc85" 00:38:07.330 ], 00:38:07.330 "product_name": "NVMe disk", 00:38:07.330 
"block_size": 4096, 00:38:07.330 "num_blocks": 38912, 00:38:07.330 "uuid": "337a6ec0-dfad-406c-accd-ad341fe3bc85", 00:38:07.330 "numa_id": 1, 00:38:07.330 "assigned_rate_limits": { 00:38:07.330 "rw_ios_per_sec": 0, 00:38:07.330 "rw_mbytes_per_sec": 0, 00:38:07.330 "r_mbytes_per_sec": 0, 00:38:07.330 "w_mbytes_per_sec": 0 00:38:07.330 }, 00:38:07.330 "claimed": false, 00:38:07.330 "zoned": false, 00:38:07.330 "supported_io_types": { 00:38:07.330 "read": true, 00:38:07.330 "write": true, 00:38:07.330 "unmap": true, 00:38:07.330 "flush": true, 00:38:07.330 "reset": true, 00:38:07.330 "nvme_admin": true, 00:38:07.330 "nvme_io": true, 00:38:07.330 "nvme_io_md": false, 00:38:07.330 "write_zeroes": true, 00:38:07.330 "zcopy": false, 00:38:07.330 "get_zone_info": false, 00:38:07.330 "zone_management": false, 00:38:07.330 "zone_append": false, 00:38:07.330 "compare": true, 00:38:07.330 "compare_and_write": true, 00:38:07.330 "abort": true, 00:38:07.330 "seek_hole": false, 00:38:07.330 "seek_data": false, 00:38:07.330 "copy": true, 00:38:07.330 "nvme_iov_md": false 00:38:07.330 }, 00:38:07.330 "memory_domains": [ 00:38:07.330 { 00:38:07.330 "dma_device_id": "system", 00:38:07.330 "dma_device_type": 1 00:38:07.330 } 00:38:07.330 ], 00:38:07.330 "driver_specific": { 00:38:07.330 "nvme": [ 00:38:07.330 { 00:38:07.330 "trid": { 00:38:07.330 "trtype": "TCP", 00:38:07.330 "adrfam": "IPv4", 00:38:07.330 "traddr": "10.0.0.2", 00:38:07.330 "trsvcid": "4420", 00:38:07.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:07.330 }, 00:38:07.330 "ctrlr_data": { 00:38:07.330 "cntlid": 1, 00:38:07.330 "vendor_id": "0x8086", 00:38:07.330 "model_number": "SPDK bdev Controller", 00:38:07.330 "serial_number": "SPDK0", 00:38:07.330 "firmware_revision": "25.01", 00:38:07.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:07.330 "oacs": { 00:38:07.330 "security": 0, 00:38:07.330 "format": 0, 00:38:07.330 "firmware": 0, 00:38:07.330 "ns_manage": 0 00:38:07.330 }, 00:38:07.330 "multi_ctrlr": true, 
00:38:07.330 "ana_reporting": false 00:38:07.330 }, 00:38:07.330 "vs": { 00:38:07.330 "nvme_version": "1.3" 00:38:07.330 }, 00:38:07.330 "ns_data": { 00:38:07.330 "id": 1, 00:38:07.330 "can_share": true 00:38:07.330 } 00:38:07.330 } 00:38:07.330 ], 00:38:07.330 "mp_policy": "active_passive" 00:38:07.330 } 00:38:07.330 } 00:38:07.330 ] 00:38:07.330 13:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1242177 00:38:07.330 13:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:07.330 13:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:07.588 Running I/O for 10 seconds... 00:38:08.524 Latency(us) 00:38:08.524 [2024-12-15T12:18:16.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.524 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:38:08.524 [2024-12-15T12:18:16.431Z] =================================================================================================================== 00:38:08.524 [2024-12-15T12:18:16.431Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:38:08.524 00:38:09.460 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:09.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:09.460 Nvme0n1 : 2.00 23273.00 90.91 0.00 0.00 0.00 0.00 0.00 00:38:09.460 [2024-12-15T12:18:17.367Z] 
=================================================================================================================== 00:38:09.460 [2024-12-15T12:18:17.367Z] Total : 23273.00 90.91 0.00 0.00 0.00 0.00 0.00 00:38:09.460 00:38:09.719 true 00:38:09.719 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:09.719 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:09.719 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:09.719 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:09.719 13:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1242177 00:38:10.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:10.656 Nvme0n1 : 3.00 23389.33 91.36 0.00 0.00 0.00 0.00 0.00 00:38:10.656 [2024-12-15T12:18:18.563Z] =================================================================================================================== 00:38:10.656 [2024-12-15T12:18:18.563Z] Total : 23389.33 91.36 0.00 0.00 0.00 0.00 0.00 00:38:10.656 00:38:11.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:11.593 Nvme0n1 : 4.00 23479.25 91.72 0.00 0.00 0.00 0.00 0.00 00:38:11.593 [2024-12-15T12:18:19.500Z] =================================================================================================================== 00:38:11.593 [2024-12-15T12:18:19.500Z] Total : 23479.25 91.72 0.00 0.00 0.00 0.00 0.00 00:38:11.593 00:38:12.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:38:12.529 Nvme0n1 : 5.00 23514.60 91.85 0.00 0.00 0.00 0.00 0.00 00:38:12.529 [2024-12-15T12:18:20.436Z] =================================================================================================================== 00:38:12.529 [2024-12-15T12:18:20.436Z] Total : 23514.60 91.85 0.00 0.00 0.00 0.00 0.00 00:38:12.529 00:38:13.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.464 Nvme0n1 : 6.00 23564.33 92.05 0.00 0.00 0.00 0.00 0.00 00:38:13.464 [2024-12-15T12:18:21.371Z] =================================================================================================================== 00:38:13.464 [2024-12-15T12:18:21.371Z] Total : 23564.33 92.05 0.00 0.00 0.00 0.00 0.00 00:38:13.464 00:38:14.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.842 Nvme0n1 : 7.00 23606.71 92.21 0.00 0.00 0.00 0.00 0.00 00:38:14.842 [2024-12-15T12:18:22.749Z] =================================================================================================================== 00:38:14.842 [2024-12-15T12:18:22.749Z] Total : 23606.71 92.21 0.00 0.00 0.00 0.00 0.00 00:38:14.842 00:38:15.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:15.778 Nvme0n1 : 8.00 23624.50 92.28 0.00 0.00 0.00 0.00 0.00 00:38:15.778 [2024-12-15T12:18:23.685Z] =================================================================================================================== 00:38:15.778 [2024-12-15T12:18:23.685Z] Total : 23624.50 92.28 0.00 0.00 0.00 0.00 0.00 00:38:15.778 00:38:16.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.714 Nvme0n1 : 9.00 23652.44 92.39 0.00 0.00 0.00 0.00 0.00 00:38:16.714 [2024-12-15T12:18:24.622Z] =================================================================================================================== 00:38:16.715 [2024-12-15T12:18:24.622Z] Total : 23652.44 92.39 0.00 0.00 0.00 0.00 0.00 00:38:16.715 
00:38:17.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.650 Nvme0n1 : 10.00 23662.10 92.43 0.00 0.00 0.00 0.00 0.00 00:38:17.650 [2024-12-15T12:18:25.557Z] =================================================================================================================== 00:38:17.650 [2024-12-15T12:18:25.557Z] Total : 23662.10 92.43 0.00 0.00 0.00 0.00 0.00 00:38:17.650 00:38:17.650 00:38:17.650 Latency(us) 00:38:17.650 [2024-12-15T12:18:25.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.650 Nvme0n1 : 10.01 23662.37 92.43 0.00 0.00 5406.55 3120.76 27088.21 00:38:17.650 [2024-12-15T12:18:25.557Z] =================================================================================================================== 00:38:17.650 [2024-12-15T12:18:25.557Z] Total : 23662.37 92.43 0.00 0.00 5406.55 3120.76 27088.21 00:38:17.650 { 00:38:17.650 "results": [ 00:38:17.650 { 00:38:17.650 "job": "Nvme0n1", 00:38:17.650 "core_mask": "0x2", 00:38:17.650 "workload": "randwrite", 00:38:17.650 "status": "finished", 00:38:17.650 "queue_depth": 128, 00:38:17.650 "io_size": 4096, 00:38:17.650 "runtime": 10.005295, 00:38:17.650 "iops": 23662.37077467481, 00:38:17.650 "mibps": 92.43113583857348, 00:38:17.650 "io_failed": 0, 00:38:17.650 "io_timeout": 0, 00:38:17.650 "avg_latency_us": 5406.550911330846, 00:38:17.650 "min_latency_us": 3120.7619047619046, 00:38:17.650 "max_latency_us": 27088.213333333333 00:38:17.650 } 00:38:17.650 ], 00:38:17.650 "core_count": 1 00:38:17.650 } 00:38:17.650 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1242070 00:38:17.650 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1242070 ']' 00:38:17.650 13:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1242070 00:38:17.650 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:17.651 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:17.651 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1242070 00:38:17.651 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:17.651 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:17.651 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1242070' 00:38:17.651 killing process with pid 1242070 00:38:17.651 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1242070 00:38:17.651 Received shutdown signal, test time was about 10.000000 seconds 00:38:17.651 00:38:17.651 Latency(us) 00:38:17.651 [2024-12-15T12:18:25.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:17.651 [2024-12-15T12:18:25.558Z] =================================================================================================================== 00:38:17.651 [2024-12-15T12:18:25.558Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:17.651 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1242070 00:38:17.909 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:17.909 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:18.168 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:18.168 13:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:18.427 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:18.427 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:18.427 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:18.427 [2024-12-15 13:18:26.323100] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:18.686 request: 00:38:18.686 { 00:38:18.686 "uuid": "8f72a039-4d04-4408-b88b-008fcd833a89", 00:38:18.686 "method": 
"bdev_lvol_get_lvstores", 00:38:18.686 "req_id": 1 00:38:18.686 } 00:38:18.686 Got JSON-RPC error response 00:38:18.686 response: 00:38:18.686 { 00:38:18.686 "code": -19, 00:38:18.686 "message": "No such device" 00:38:18.686 } 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:18.686 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:18.945 aio_bdev 00:38:18.945 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 337a6ec0-dfad-406c-accd-ad341fe3bc85 00:38:18.945 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=337a6ec0-dfad-406c-accd-ad341fe3bc85 00:38:18.945 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:18.945 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:18.945 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:18.945 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:18.945 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:19.204 13:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 337a6ec0-dfad-406c-accd-ad341fe3bc85 -t 2000 00:38:19.204 [ 00:38:19.204 { 00:38:19.204 "name": "337a6ec0-dfad-406c-accd-ad341fe3bc85", 00:38:19.204 "aliases": [ 00:38:19.204 "lvs/lvol" 00:38:19.204 ], 00:38:19.204 "product_name": "Logical Volume", 00:38:19.204 "block_size": 4096, 00:38:19.204 "num_blocks": 38912, 00:38:19.204 "uuid": "337a6ec0-dfad-406c-accd-ad341fe3bc85", 00:38:19.204 "assigned_rate_limits": { 00:38:19.204 "rw_ios_per_sec": 0, 00:38:19.204 "rw_mbytes_per_sec": 0, 00:38:19.204 "r_mbytes_per_sec": 0, 00:38:19.204 "w_mbytes_per_sec": 0 00:38:19.204 }, 00:38:19.204 "claimed": false, 00:38:19.204 "zoned": false, 00:38:19.204 "supported_io_types": { 00:38:19.204 "read": true, 00:38:19.204 "write": true, 00:38:19.204 "unmap": true, 00:38:19.204 "flush": false, 00:38:19.204 "reset": true, 00:38:19.204 "nvme_admin": false, 00:38:19.204 "nvme_io": false, 00:38:19.204 "nvme_io_md": false, 00:38:19.204 "write_zeroes": true, 00:38:19.204 "zcopy": false, 00:38:19.204 "get_zone_info": false, 00:38:19.204 "zone_management": false, 00:38:19.204 "zone_append": false, 00:38:19.204 "compare": false, 00:38:19.204 "compare_and_write": false, 00:38:19.204 "abort": false, 00:38:19.204 "seek_hole": true, 00:38:19.204 "seek_data": true, 00:38:19.204 "copy": false, 00:38:19.204 "nvme_iov_md": false 00:38:19.204 }, 00:38:19.204 "driver_specific": { 00:38:19.204 "lvol": { 00:38:19.204 "lvol_store_uuid": "8f72a039-4d04-4408-b88b-008fcd833a89", 00:38:19.204 "base_bdev": "aio_bdev", 00:38:19.204 
"thin_provision": false, 00:38:19.204 "num_allocated_clusters": 38, 00:38:19.204 "snapshot": false, 00:38:19.204 "clone": false, 00:38:19.204 "esnap_clone": false 00:38:19.204 } 00:38:19.204 } 00:38:19.204 } 00:38:19.204 ] 00:38:19.463 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:19.463 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:19.463 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:19.463 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:19.463 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f72a039-4d04-4408-b88b-008fcd833a89 00:38:19.463 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:19.722 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:19.722 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 337a6ec0-dfad-406c-accd-ad341fe3bc85 00:38:19.981 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f72a039-4d04-4408-b88b-008fcd833a89 
00:38:19.981 13:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:20.240 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:20.240 00:38:20.240 real 0m15.476s 00:38:20.240 user 0m14.961s 00:38:20.240 sys 0m1.474s 00:38:20.240 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:20.240 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:20.240 ************************************ 00:38:20.240 END TEST lvs_grow_clean 00:38:20.240 ************************************ 00:38:20.240 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:20.240 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:20.240 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:20.240 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:20.499 ************************************ 00:38:20.499 START TEST lvs_grow_dirty 00:38:20.499 ************************************ 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:20.499 13:18:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:20.499 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:20.758 13:18:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=16fe1e4a-1a04-4879-8628-755c5682770c 00:38:20.758 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:20.758 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:21.017 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:21.017 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:21.017 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16fe1e4a-1a04-4879-8628-755c5682770c lvol 150 00:38:21.276 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fd1dcaa9-00c5-449b-8310-62d824491e6f 00:38:21.276 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:21.276 13:18:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:21.276 [2024-12-15 13:18:29.146966] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:21.276 [2024-12-15 
13:18:29.147094] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:21.276 true 00:38:21.276 13:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:21.276 13:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:21.535 13:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:21.535 13:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:21.793 13:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fd1dcaa9-00c5-449b-8310-62d824491e6f 00:38:22.052 13:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:22.052 [2024-12-15 13:18:29.891445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:22.052 13:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:22.310 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1244467 00:38:22.310 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:22.310 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1244467 /var/tmp/bdevperf.sock 00:38:22.311 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1244467 ']' 00:38:22.311 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:22.311 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:22.311 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:22.311 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:22.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:22.311 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:22.311 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:22.311 [2024-12-15 13:18:30.157421] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:22.311 [2024-12-15 13:18:30.157480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244467 ] 00:38:22.569 [2024-12-15 13:18:30.233571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.570 [2024-12-15 13:18:30.256126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:22.570 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:22.570 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:22.570 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:22.828 Nvme0n1 00:38:22.828 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:23.087 [ 00:38:23.087 { 00:38:23.087 "name": "Nvme0n1", 00:38:23.087 "aliases": [ 00:38:23.087 "fd1dcaa9-00c5-449b-8310-62d824491e6f" 00:38:23.087 ], 00:38:23.087 "product_name": "NVMe disk", 00:38:23.087 "block_size": 4096, 00:38:23.087 "num_blocks": 38912, 00:38:23.087 "uuid": "fd1dcaa9-00c5-449b-8310-62d824491e6f", 00:38:23.087 "numa_id": 1, 00:38:23.087 "assigned_rate_limits": { 00:38:23.087 "rw_ios_per_sec": 0, 00:38:23.087 "rw_mbytes_per_sec": 0, 00:38:23.087 "r_mbytes_per_sec": 0, 00:38:23.087 "w_mbytes_per_sec": 0 00:38:23.087 }, 00:38:23.087 "claimed": false, 00:38:23.087 "zoned": false, 
00:38:23.087 "supported_io_types": { 00:38:23.087 "read": true, 00:38:23.087 "write": true, 00:38:23.087 "unmap": true, 00:38:23.087 "flush": true, 00:38:23.087 "reset": true, 00:38:23.087 "nvme_admin": true, 00:38:23.087 "nvme_io": true, 00:38:23.087 "nvme_io_md": false, 00:38:23.087 "write_zeroes": true, 00:38:23.087 "zcopy": false, 00:38:23.087 "get_zone_info": false, 00:38:23.087 "zone_management": false, 00:38:23.087 "zone_append": false, 00:38:23.087 "compare": true, 00:38:23.087 "compare_and_write": true, 00:38:23.087 "abort": true, 00:38:23.087 "seek_hole": false, 00:38:23.087 "seek_data": false, 00:38:23.087 "copy": true, 00:38:23.087 "nvme_iov_md": false 00:38:23.087 }, 00:38:23.087 "memory_domains": [ 00:38:23.087 { 00:38:23.087 "dma_device_id": "system", 00:38:23.087 "dma_device_type": 1 00:38:23.087 } 00:38:23.087 ], 00:38:23.087 "driver_specific": { 00:38:23.087 "nvme": [ 00:38:23.087 { 00:38:23.087 "trid": { 00:38:23.087 "trtype": "TCP", 00:38:23.087 "adrfam": "IPv4", 00:38:23.087 "traddr": "10.0.0.2", 00:38:23.087 "trsvcid": "4420", 00:38:23.087 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:23.087 }, 00:38:23.087 "ctrlr_data": { 00:38:23.087 "cntlid": 1, 00:38:23.087 "vendor_id": "0x8086", 00:38:23.087 "model_number": "SPDK bdev Controller", 00:38:23.087 "serial_number": "SPDK0", 00:38:23.087 "firmware_revision": "25.01", 00:38:23.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.087 "oacs": { 00:38:23.087 "security": 0, 00:38:23.087 "format": 0, 00:38:23.087 "firmware": 0, 00:38:23.087 "ns_manage": 0 00:38:23.087 }, 00:38:23.087 "multi_ctrlr": true, 00:38:23.087 "ana_reporting": false 00:38:23.087 }, 00:38:23.087 "vs": { 00:38:23.087 "nvme_version": "1.3" 00:38:23.087 }, 00:38:23.087 "ns_data": { 00:38:23.087 "id": 1, 00:38:23.087 "can_share": true 00:38:23.087 } 00:38:23.087 } 00:38:23.087 ], 00:38:23.087 "mp_policy": "active_passive" 00:38:23.087 } 00:38:23.087 } 00:38:23.087 ] 00:38:23.088 13:18:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1244686 00:38:23.088 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:23.088 13:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:23.088 Running I/O for 10 seconds... 00:38:24.463 Latency(us) 00:38:24.463 [2024-12-15T12:18:32.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:24.463 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:38:24.463 [2024-12-15T12:18:32.370Z] =================================================================================================================== 00:38:24.463 [2024-12-15T12:18:32.370Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:38:24.463 00:38:25.030 13:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:25.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:25.289 Nvme0n1 : 2.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:38:25.289 [2024-12-15T12:18:33.196Z] =================================================================================================================== 00:38:25.289 [2024-12-15T12:18:33.196Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:38:25.289 00:38:25.289 true 00:38:25.289 13:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:25.289 13:18:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:25.548 13:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:25.548 13:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:25.548 13:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1244686 00:38:26.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:26.115 Nvme0n1 : 3.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:38:26.115 [2024-12-15T12:18:34.022Z] =================================================================================================================== 00:38:26.115 [2024-12-15T12:18:34.022Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:38:26.115 00:38:27.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:27.501 Nvme0n1 : 4.00 23463.25 91.65 0.00 0.00 0.00 0.00 0.00 00:38:27.501 [2024-12-15T12:18:35.408Z] =================================================================================================================== 00:38:27.501 [2024-12-15T12:18:35.408Z] Total : 23463.25 91.65 0.00 0.00 0.00 0.00 0.00 00:38:27.501 00:38:28.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.437 Nvme0n1 : 5.00 23527.20 91.90 0.00 0.00 0.00 0.00 0.00 00:38:28.437 [2024-12-15T12:18:36.344Z] =================================================================================================================== 00:38:28.437 [2024-12-15T12:18:36.344Z] Total : 23527.20 91.90 0.00 0.00 0.00 0.00 0.00 00:38:28.437 00:38:29.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:38:29.373 Nvme0n1 : 6.00 23574.83 92.09 0.00 0.00 0.00 0.00 0.00 00:38:29.373 [2024-12-15T12:18:37.280Z] =================================================================================================================== 00:38:29.373 [2024-12-15T12:18:37.280Z] Total : 23574.83 92.09 0.00 0.00 0.00 0.00 0.00 00:38:29.373 00:38:30.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:30.309 Nvme0n1 : 7.00 23597.57 92.18 0.00 0.00 0.00 0.00 0.00 00:38:30.309 [2024-12-15T12:18:38.216Z] =================================================================================================================== 00:38:30.309 [2024-12-15T12:18:38.216Z] Total : 23597.57 92.18 0.00 0.00 0.00 0.00 0.00 00:38:30.309 00:38:31.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:31.245 Nvme0n1 : 8.00 23624.50 92.28 0.00 0.00 0.00 0.00 0.00 00:38:31.245 [2024-12-15T12:18:39.152Z] =================================================================================================================== 00:38:31.245 [2024-12-15T12:18:39.152Z] Total : 23624.50 92.28 0.00 0.00 0.00 0.00 0.00 00:38:31.245 00:38:32.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:32.181 Nvme0n1 : 9.00 23649.11 92.38 0.00 0.00 0.00 0.00 0.00 00:38:32.181 [2024-12-15T12:18:40.088Z] =================================================================================================================== 00:38:32.181 [2024-12-15T12:18:40.088Z] Total : 23649.11 92.38 0.00 0.00 0.00 0.00 0.00 00:38:32.181 00:38:33.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:33.117 Nvme0n1 : 10.00 23659.10 92.42 0.00 0.00 0.00 0.00 0.00 00:38:33.117 [2024-12-15T12:18:41.024Z] =================================================================================================================== 00:38:33.117 [2024-12-15T12:18:41.024Z] Total : 23659.10 92.42 0.00 0.00 0.00 0.00 0.00 00:38:33.117 
00:38:33.117 00:38:33.117 Latency(us) 00:38:33.117 [2024-12-15T12:18:41.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:33.117 Nvme0n1 : 10.00 23662.53 92.43 0.00 0.00 5406.41 3120.76 25090.93 00:38:33.117 [2024-12-15T12:18:41.024Z] =================================================================================================================== 00:38:33.117 [2024-12-15T12:18:41.024Z] Total : 23662.53 92.43 0.00 0.00 5406.41 3120.76 25090.93 00:38:33.117 { 00:38:33.117 "results": [ 00:38:33.117 { 00:38:33.117 "job": "Nvme0n1", 00:38:33.117 "core_mask": "0x2", 00:38:33.117 "workload": "randwrite", 00:38:33.117 "status": "finished", 00:38:33.117 "queue_depth": 128, 00:38:33.117 "io_size": 4096, 00:38:33.117 "runtime": 10.00396, 00:38:33.117 "iops": 23662.529638263248, 00:38:33.117 "mibps": 92.43175639946581, 00:38:33.117 "io_failed": 0, 00:38:33.117 "io_timeout": 0, 00:38:33.117 "avg_latency_us": 5406.406663476225, 00:38:33.117 "min_latency_us": 3120.7619047619046, 00:38:33.117 "max_latency_us": 25090.925714285713 00:38:33.117 } 00:38:33.117 ], 00:38:33.117 "core_count": 1 00:38:33.117 } 00:38:33.117 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1244467 00:38:33.117 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1244467 ']' 00:38:33.117 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1244467 00:38:33.117 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:33.376 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:33.376 13:18:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1244467 00:38:33.376 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:33.376 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:33.376 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1244467' 00:38:33.376 killing process with pid 1244467 00:38:33.376 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1244467 00:38:33.376 Received shutdown signal, test time was about 10.000000 seconds 00:38:33.376 00:38:33.376 Latency(us) 00:38:33.376 [2024-12-15T12:18:41.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.376 [2024-12-15T12:18:41.283Z] =================================================================================================================== 00:38:33.376 [2024-12-15T12:18:41.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:33.376 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1244467 00:38:33.376 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:33.634 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:33.893 13:18:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:33.893 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1241681 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1241681 00:38:34.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1241681 Killed "${NVMF_APP[@]}" "$@" 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1246378 00:38:34.153 13:18:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1246378 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1246378 ']' 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:34.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:34.153 13:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:34.153 [2024-12-15 13:18:41.918594] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:34.153 [2024-12-15 13:18:41.919498] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:34.153 [2024-12-15 13:18:41.919532] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:34.153 [2024-12-15 13:18:41.999320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.153 [2024-12-15 13:18:42.020480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:34.153 [2024-12-15 13:18:42.020514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:34.153 [2024-12-15 13:18:42.020520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:34.153 [2024-12-15 13:18:42.020526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:34.153 [2024-12-15 13:18:42.020535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:34.153 [2024-12-15 13:18:42.021041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.412 [2024-12-15 13:18:42.084323] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:34.412 [2024-12-15 13:18:42.084539] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:34.412 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.412 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:34.412 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:34.412 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:34.412 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:34.413 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:34.413 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:34.672 [2024-12-15 13:18:42.322548] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:34.672 [2024-12-15 13:18:42.322748] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:34.672 [2024-12-15 13:18:42.322844] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:34.672 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:34.672 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fd1dcaa9-00c5-449b-8310-62d824491e6f 00:38:34.672 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=fd1dcaa9-00c5-449b-8310-62d824491e6f 00:38:34.672 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:34.672 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:34.672 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:34.672 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:34.672 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:34.672 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fd1dcaa9-00c5-449b-8310-62d824491e6f -t 2000 00:38:34.931 [ 00:38:34.931 { 00:38:34.931 "name": "fd1dcaa9-00c5-449b-8310-62d824491e6f", 00:38:34.931 "aliases": [ 00:38:34.931 "lvs/lvol" 00:38:34.931 ], 00:38:34.931 "product_name": "Logical Volume", 00:38:34.931 "block_size": 4096, 00:38:34.931 "num_blocks": 38912, 00:38:34.931 "uuid": "fd1dcaa9-00c5-449b-8310-62d824491e6f", 00:38:34.931 "assigned_rate_limits": { 00:38:34.931 "rw_ios_per_sec": 0, 00:38:34.931 "rw_mbytes_per_sec": 0, 00:38:34.931 "r_mbytes_per_sec": 0, 00:38:34.931 "w_mbytes_per_sec": 0 00:38:34.931 }, 00:38:34.931 "claimed": false, 00:38:34.931 "zoned": false, 00:38:34.931 "supported_io_types": { 00:38:34.931 "read": true, 00:38:34.931 "write": true, 00:38:34.931 "unmap": true, 00:38:34.931 "flush": false, 00:38:34.931 "reset": true, 00:38:34.931 "nvme_admin": false, 00:38:34.931 "nvme_io": false, 00:38:34.931 "nvme_io_md": false, 00:38:34.931 "write_zeroes": true, 
00:38:34.931 "zcopy": false, 00:38:34.931 "get_zone_info": false, 00:38:34.931 "zone_management": false, 00:38:34.931 "zone_append": false, 00:38:34.931 "compare": false, 00:38:34.931 "compare_and_write": false, 00:38:34.931 "abort": false, 00:38:34.931 "seek_hole": true, 00:38:34.931 "seek_data": true, 00:38:34.931 "copy": false, 00:38:34.931 "nvme_iov_md": false 00:38:34.931 }, 00:38:34.931 "driver_specific": { 00:38:34.931 "lvol": { 00:38:34.931 "lvol_store_uuid": "16fe1e4a-1a04-4879-8628-755c5682770c", 00:38:34.931 "base_bdev": "aio_bdev", 00:38:34.931 "thin_provision": false, 00:38:34.931 "num_allocated_clusters": 38, 00:38:34.931 "snapshot": false, 00:38:34.931 "clone": false, 00:38:34.931 "esnap_clone": false 00:38:34.931 } 00:38:34.931 } 00:38:34.931 } 00:38:34.931 ] 00:38:34.931 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:34.931 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:34.931 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:35.190 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:35.190 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:35.190 13:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:35.448 [2024-12-15 13:18:43.281477] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:35.448 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:35.707 request: 00:38:35.707 { 00:38:35.707 "uuid": "16fe1e4a-1a04-4879-8628-755c5682770c", 00:38:35.707 "method": "bdev_lvol_get_lvstores", 00:38:35.707 "req_id": 1 00:38:35.707 } 00:38:35.707 Got JSON-RPC error response 00:38:35.707 response: 00:38:35.707 { 00:38:35.707 "code": -19, 00:38:35.708 "message": "No such device" 00:38:35.708 } 00:38:35.708 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:35.708 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:35.708 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:35.708 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:35.708 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:35.967 aio_bdev 00:38:35.967 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fd1dcaa9-00c5-449b-8310-62d824491e6f 00:38:35.967 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fd1dcaa9-00c5-449b-8310-62d824491e6f 00:38:35.967 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:35.967 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:35.967 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:35.967 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:35.967 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:36.226 13:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fd1dcaa9-00c5-449b-8310-62d824491e6f -t 2000 00:38:36.226 [ 00:38:36.226 { 00:38:36.226 "name": "fd1dcaa9-00c5-449b-8310-62d824491e6f", 00:38:36.226 "aliases": [ 00:38:36.226 "lvs/lvol" 00:38:36.226 ], 00:38:36.226 "product_name": "Logical Volume", 00:38:36.226 "block_size": 4096, 00:38:36.226 "num_blocks": 38912, 00:38:36.226 "uuid": "fd1dcaa9-00c5-449b-8310-62d824491e6f", 00:38:36.226 "assigned_rate_limits": { 00:38:36.226 "rw_ios_per_sec": 0, 00:38:36.226 "rw_mbytes_per_sec": 0, 00:38:36.226 
"r_mbytes_per_sec": 0, 00:38:36.226 "w_mbytes_per_sec": 0 00:38:36.226 }, 00:38:36.226 "claimed": false, 00:38:36.226 "zoned": false, 00:38:36.226 "supported_io_types": { 00:38:36.226 "read": true, 00:38:36.226 "write": true, 00:38:36.226 "unmap": true, 00:38:36.226 "flush": false, 00:38:36.226 "reset": true, 00:38:36.226 "nvme_admin": false, 00:38:36.226 "nvme_io": false, 00:38:36.226 "nvme_io_md": false, 00:38:36.226 "write_zeroes": true, 00:38:36.226 "zcopy": false, 00:38:36.226 "get_zone_info": false, 00:38:36.226 "zone_management": false, 00:38:36.226 "zone_append": false, 00:38:36.226 "compare": false, 00:38:36.226 "compare_and_write": false, 00:38:36.226 "abort": false, 00:38:36.226 "seek_hole": true, 00:38:36.226 "seek_data": true, 00:38:36.226 "copy": false, 00:38:36.226 "nvme_iov_md": false 00:38:36.226 }, 00:38:36.226 "driver_specific": { 00:38:36.226 "lvol": { 00:38:36.226 "lvol_store_uuid": "16fe1e4a-1a04-4879-8628-755c5682770c", 00:38:36.226 "base_bdev": "aio_bdev", 00:38:36.226 "thin_provision": false, 00:38:36.226 "num_allocated_clusters": 38, 00:38:36.226 "snapshot": false, 00:38:36.226 "clone": false, 00:38:36.226 "esnap_clone": false 00:38:36.226 } 00:38:36.226 } 00:38:36.226 } 00:38:36.226 ] 00:38:36.226 13:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:36.226 13:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:36.226 13:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:36.485 13:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:36.485 13:18:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:36.485 13:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:36.744 13:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:36.744 13:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fd1dcaa9-00c5-449b-8310-62d824491e6f 00:38:37.003 13:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16fe1e4a-1a04-4879-8628-755c5682770c 00:38:37.003 13:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:37.262 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:37.262 00:38:37.262 real 0m16.981s 00:38:37.262 user 0m34.305s 00:38:37.262 sys 0m3.862s 00:38:37.262 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:37.262 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:37.262 ************************************ 00:38:37.262 END TEST lvs_grow_dirty 00:38:37.262 ************************************ 
00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:37.521 nvmf_trace.0 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:37.521 13:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:37.521 rmmod nvme_tcp 00:38:37.521 rmmod nvme_fabrics 00:38:37.521 rmmod nvme_keyring 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1246378 ']' 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1246378 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1246378 ']' 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1246378 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246378 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:37.521 
13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246378' 00:38:37.521 killing process with pid 1246378 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1246378 00:38:37.521 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1246378 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.780 13:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.687 
13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:39.687 00:38:39.687 real 0m41.548s 00:38:39.687 user 0m51.673s 00:38:39.687 sys 0m10.238s 00:38:39.687 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:39.687 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:39.687 ************************************ 00:38:39.687 END TEST nvmf_lvs_grow 00:38:39.687 ************************************ 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:39.947 ************************************ 00:38:39.947 START TEST nvmf_bdev_io_wait 00:38:39.947 ************************************ 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:39.947 * Looking for test storage... 
00:38:39.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:39.947 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:39.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.947 --rc genhtml_branch_coverage=1 00:38:39.947 --rc genhtml_function_coverage=1 00:38:39.948 --rc genhtml_legend=1 00:38:39.948 --rc geninfo_all_blocks=1 00:38:39.948 --rc geninfo_unexecuted_blocks=1 00:38:39.948 00:38:39.948 ' 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:39.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.948 --rc genhtml_branch_coverage=1 00:38:39.948 --rc genhtml_function_coverage=1 00:38:39.948 --rc genhtml_legend=1 00:38:39.948 --rc geninfo_all_blocks=1 00:38:39.948 --rc geninfo_unexecuted_blocks=1 00:38:39.948 00:38:39.948 ' 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:39.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.948 --rc genhtml_branch_coverage=1 00:38:39.948 --rc genhtml_function_coverage=1 00:38:39.948 --rc genhtml_legend=1 00:38:39.948 --rc geninfo_all_blocks=1 00:38:39.948 --rc geninfo_unexecuted_blocks=1 00:38:39.948 00:38:39.948 ' 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:39.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.948 --rc genhtml_branch_coverage=1 00:38:39.948 --rc genhtml_function_coverage=1 
00:38:39.948 --rc genhtml_legend=1 00:38:39.948 --rc geninfo_all_blocks=1 00:38:39.948 --rc geninfo_unexecuted_blocks=1 00:38:39.948 00:38:39.948 ' 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:39.948 13:18:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.948 13:18:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:39.948 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:40.207 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:40.207 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:40.207 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:40.207 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:40.207 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:40.207 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:40.207 13:18:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:40.207 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:40.208 13:18:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:40.208 13:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:46.780 13:18:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:46.780 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.780 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:46.780 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:46.781 Found net devices under 0000:af:00.0: cvl_0_0 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:46.781 Found net devices under 0000:af:00.1: cvl_0_1 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:46.781 13:18:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:46.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:46.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:38:46.781 00:38:46.781 --- 10.0.0.2 ping statistics --- 00:38:46.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.781 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:46.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:46.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:38:46.781 00:38:46.781 --- 10.0.0.1 ping statistics --- 00:38:46.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.781 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:46.781 13:18:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1250440 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1250440 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1250440 ']' 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:46.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:46.781 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.781 [2024-12-15 13:18:53.803903] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:46.781 [2024-12-15 13:18:53.804830] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:46.781 [2024-12-15 13:18:53.804865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:46.781 [2024-12-15 13:18:53.881055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:46.781 [2024-12-15 13:18:53.905490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:46.781 [2024-12-15 13:18:53.905529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:46.781 [2024-12-15 13:18:53.905536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:46.781 [2024-12-15 13:18:53.905542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:46.782 [2024-12-15 13:18:53.905547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:46.782 [2024-12-15 13:18:53.906845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:46.782 [2024-12-15 13:18:53.906915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:46.782 [2024-12-15 13:18:53.907023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.782 [2024-12-15 13:18:53.907024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:46.782 [2024-12-15 13:18:53.907383] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.782 13:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.782 13:18:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.782 [2024-12-15 13:18:54.055868] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:46.782 [2024-12-15 13:18:54.056327] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:46.782 [2024-12-15 13:18:54.056420] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:46.782 [2024-12-15 13:18:54.056564] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.782 [2024-12-15 13:18:54.067812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.782 Malloc0 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.782 13:18:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:46.782 [2024-12-15 13:18:54.144055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1250472 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1250474 00:38:46.782 13:18:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:46.782 { 00:38:46.782 "params": { 00:38:46.782 "name": "Nvme$subsystem", 00:38:46.782 "trtype": "$TEST_TRANSPORT", 00:38:46.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:46.782 "adrfam": "ipv4", 00:38:46.782 "trsvcid": "$NVMF_PORT", 00:38:46.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:46.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:46.782 "hdgst": ${hdgst:-false}, 00:38:46.782 "ddgst": ${ddgst:-false} 00:38:46.782 }, 00:38:46.782 "method": "bdev_nvme_attach_controller" 00:38:46.782 } 00:38:46.782 EOF 00:38:46.782 )") 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1250476 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:46.782 13:18:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:46.782 { 00:38:46.782 "params": { 00:38:46.782 "name": "Nvme$subsystem", 00:38:46.782 "trtype": "$TEST_TRANSPORT", 00:38:46.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:46.782 "adrfam": "ipv4", 00:38:46.782 "trsvcid": "$NVMF_PORT", 00:38:46.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:46.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:46.782 "hdgst": ${hdgst:-false}, 00:38:46.782 "ddgst": ${ddgst:-false} 00:38:46.782 }, 00:38:46.782 "method": "bdev_nvme_attach_controller" 00:38:46.782 } 00:38:46.782 EOF 00:38:46.782 )") 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1250479 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:46.782 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:46.783 { 00:38:46.783 "params": { 00:38:46.783 "name": "Nvme$subsystem", 00:38:46.783 "trtype": "$TEST_TRANSPORT", 00:38:46.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:46.783 "adrfam": "ipv4", 00:38:46.783 "trsvcid": "$NVMF_PORT", 00:38:46.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:46.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:46.783 "hdgst": ${hdgst:-false}, 00:38:46.783 "ddgst": ${ddgst:-false} 00:38:46.783 }, 00:38:46.783 "method": "bdev_nvme_attach_controller" 00:38:46.783 } 00:38:46.783 EOF 00:38:46.783 )") 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:46.783 { 00:38:46.783 "params": { 00:38:46.783 "name": "Nvme$subsystem", 00:38:46.783 "trtype": "$TEST_TRANSPORT", 00:38:46.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:46.783 "adrfam": "ipv4", 00:38:46.783 "trsvcid": "$NVMF_PORT", 00:38:46.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:46.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:46.783 "hdgst": ${hdgst:-false}, 00:38:46.783 "ddgst": ${ddgst:-false} 00:38:46.783 }, 00:38:46.783 "method": 
"bdev_nvme_attach_controller" 00:38:46.783 } 00:38:46.783 EOF 00:38:46.783 )") 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1250472 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:46.783 "params": { 00:38:46.783 "name": "Nvme1", 00:38:46.783 "trtype": "tcp", 00:38:46.783 "traddr": "10.0.0.2", 00:38:46.783 "adrfam": "ipv4", 00:38:46.783 "trsvcid": "4420", 00:38:46.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:46.783 "hdgst": false, 00:38:46.783 "ddgst": false 00:38:46.783 }, 00:38:46.783 "method": "bdev_nvme_attach_controller" 00:38:46.783 }' 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:46.783 "params": { 00:38:46.783 "name": "Nvme1", 00:38:46.783 "trtype": "tcp", 00:38:46.783 "traddr": "10.0.0.2", 00:38:46.783 "adrfam": "ipv4", 00:38:46.783 "trsvcid": "4420", 00:38:46.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:46.783 "hdgst": false, 00:38:46.783 "ddgst": false 00:38:46.783 }, 00:38:46.783 "method": "bdev_nvme_attach_controller" 00:38:46.783 }' 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:46.783 "params": { 00:38:46.783 "name": "Nvme1", 00:38:46.783 "trtype": "tcp", 00:38:46.783 "traddr": "10.0.0.2", 00:38:46.783 "adrfam": "ipv4", 00:38:46.783 "trsvcid": "4420", 00:38:46.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:46.783 "hdgst": false, 00:38:46.783 "ddgst": false 00:38:46.783 }, 00:38:46.783 "method": "bdev_nvme_attach_controller" 00:38:46.783 }' 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:46.783 13:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:46.783 "params": { 00:38:46.783 "name": "Nvme1", 00:38:46.783 "trtype": "tcp", 00:38:46.783 "traddr": "10.0.0.2", 00:38:46.783 "adrfam": "ipv4", 00:38:46.783 "trsvcid": "4420", 00:38:46.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:46.783 "hdgst": false, 00:38:46.783 "ddgst": false 00:38:46.783 }, 00:38:46.783 "method": "bdev_nvme_attach_controller" 
00:38:46.783 }' 00:38:46.783 [2024-12-15 13:18:54.195426] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:46.783 [2024-12-15 13:18:54.195479] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:46.783 [2024-12-15 13:18:54.198454] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:46.783 [2024-12-15 13:18:54.198502] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:46.783 [2024-12-15 13:18:54.199086] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:46.783 [2024-12-15 13:18:54.199127] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:46.783 [2024-12-15 13:18:54.199802] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:46.783 [2024-12-15 13:18:54.199850] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:46.783 [2024-12-15 13:18:54.376348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.783 [2024-12-15 13:18:54.393535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:46.783 [2024-12-15 13:18:54.475419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.783 [2024-12-15 13:18:54.498051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:38:46.783 [2024-12-15 13:18:54.528710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.783 [2024-12-15 13:18:54.544780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:38:46.783 [2024-12-15 13:18:54.581585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.783 [2024-12-15 13:18:54.597554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:38:47.042 Running I/O for 1 seconds... 00:38:47.042 Running I/O for 1 seconds... 00:38:47.042 Running I/O for 1 seconds... 00:38:47.042 Running I/O for 1 seconds... 
00:38:47.978 8607.00 IOPS, 33.62 MiB/s 00:38:47.978 Latency(us) 00:38:47.978 [2024-12-15T12:18:55.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.978 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:47.978 Nvme1n1 : 1.01 8613.46 33.65 0.00 0.00 14699.94 3214.38 26214.40 00:38:47.978 [2024-12-15T12:18:55.885Z] =================================================================================================================== 00:38:47.978 [2024-12-15T12:18:55.885Z] Total : 8613.46 33.65 0.00 0.00 14699.94 3214.38 26214.40 00:38:47.978 11783.00 IOPS, 46.03 MiB/s 00:38:47.978 Latency(us) 00:38:47.978 [2024-12-15T12:18:55.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.978 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:47.978 Nvme1n1 : 1.01 11824.22 46.19 0.00 0.00 10784.15 4369.07 14792.41 00:38:47.978 [2024-12-15T12:18:55.885Z] =================================================================================================================== 00:38:47.978 [2024-12-15T12:18:55.885Z] Total : 11824.22 46.19 0.00 0.00 10784.15 4369.07 14792.41 00:38:47.978 8565.00 IOPS, 33.46 MiB/s 00:38:47.978 Latency(us) 00:38:47.978 [2024-12-15T12:18:55.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.978 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:47.978 Nvme1n1 : 1.01 8690.82 33.95 0.00 0.00 14696.31 3261.20 29709.65 00:38:47.978 [2024-12-15T12:18:55.885Z] =================================================================================================================== 00:38:47.978 [2024-12-15T12:18:55.885Z] Total : 8690.82 33.95 0.00 0.00 14696.31 3261.20 29709.65 00:38:47.978 243864.00 IOPS, 952.59 MiB/s 00:38:47.978 Latency(us) 00:38:47.978 [2024-12-15T12:18:55.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.978 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:38:47.978 Nvme1n1 : 1.00 243498.23 951.16 0.00 0.00 523.22 222.35 1497.97 00:38:47.978 [2024-12-15T12:18:55.885Z] =================================================================================================================== 00:38:47.978 [2024-12-15T12:18:55.885Z] Total : 243498.23 951.16 0.00 0.00 523.22 222.35 1497.97 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1250474 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1250476 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1250479 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:48.237 13:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:48.237 13:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:48.237 rmmod nvme_tcp 00:38:48.237 rmmod nvme_fabrics 00:38:48.237 rmmod nvme_keyring 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1250440 ']' 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1250440 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1250440 ']' 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1250440 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1250440 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1250440' 00:38:48.237 killing process with pid 1250440 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1250440 00:38:48.237 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1250440 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.496 13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:48.496 
13:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:51.031 00:38:51.031 real 0m10.672s 00:38:51.031 user 0m14.783s 00:38:51.031 sys 0m6.243s 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:51.031 ************************************ 00:38:51.031 END TEST nvmf_bdev_io_wait 00:38:51.031 ************************************ 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:51.031 ************************************ 00:38:51.031 START TEST nvmf_queue_depth 00:38:51.031 ************************************ 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:51.031 * Looking for test storage... 
00:38:51.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:51.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.031 --rc genhtml_branch_coverage=1 00:38:51.031 --rc genhtml_function_coverage=1 00:38:51.031 --rc genhtml_legend=1 00:38:51.031 --rc geninfo_all_blocks=1 00:38:51.031 --rc geninfo_unexecuted_blocks=1 00:38:51.031 00:38:51.031 ' 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:51.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.031 --rc genhtml_branch_coverage=1 00:38:51.031 --rc genhtml_function_coverage=1 00:38:51.031 --rc genhtml_legend=1 00:38:51.031 --rc geninfo_all_blocks=1 00:38:51.031 --rc geninfo_unexecuted_blocks=1 00:38:51.031 00:38:51.031 ' 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:51.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.031 --rc genhtml_branch_coverage=1 00:38:51.031 --rc genhtml_function_coverage=1 00:38:51.031 --rc genhtml_legend=1 00:38:51.031 --rc geninfo_all_blocks=1 00:38:51.031 --rc geninfo_unexecuted_blocks=1 00:38:51.031 00:38:51.031 ' 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:51.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:51.031 --rc genhtml_branch_coverage=1 00:38:51.031 --rc genhtml_function_coverage=1 00:38:51.031 --rc genhtml_legend=1 00:38:51.031 --rc 
geninfo_all_blocks=1 00:38:51.031 --rc geninfo_unexecuted_blocks=1 00:38:51.031 00:38:51.031 ' 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:51.031 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.032 13:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:51.032 13:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:51.032 13:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:51.032 13:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:56.413 
13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:56.413 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:56.413 13:19:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:56.413 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:56.413 Found net devices under 0000:af:00.0: cvl_0_0 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:56.413 Found net devices under 0000:af:00.1: cvl_0_1 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:56.413 13:19:04 
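The probe loop above matches each discovered PCI vendor/device pair against the e810/x722/mlx id tables built earlier in nvmf/common.sh, which is how both 0x8086:0x159b ports get picked up as E810/ice devices. A minimal sketch of that classification, listing only the ids visible in this log (the real table in nvmf/common.sh carries more entries):

```shell
#!/usr/bin/env bash
# classify_nic VENDOR DEVICE -> NIC family, mirroring the pci_bus_cache
# lookups above. Only ids that appear in this log are listed; the real
# table in nvmf/common.sh is longer.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;  # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;  # Intel X722
        0x15b3:*)                    echo mlx ;;   # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the id of both ports found above (0000:af:00.0/.1)
```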
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:56.413 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:56.414 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:38:56.414 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:56.414 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:56.414 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:56.414 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:56.414 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:56.414 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:56.414 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:56.414 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:56.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:56.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:38:56.673 00:38:56.673 --- 10.0.0.2 ping statistics --- 00:38:56.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:56.673 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:56.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:56.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:38:56.673 00:38:56.673 --- 10.0.0.1 ping statistics --- 00:38:56.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:56.673 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:56.673 13:19:04 
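The nvmf_tcp_init steps above (sh@267 through sh@291) wire one physical port into a dedicated network namespace so target and initiator can exchange real TCP traffic on a single host, then verify the path with ping in both directions. Collected here into one function for readability; it needs root and this host's cvl_0_0/cvl_0_1 netdevs, so it is a sketch of what the log ran, defined but not invoked:

```shell
#!/usr/bin/env bash
# Sketch of the namespace plumbing performed by nvmf_tcp_init above.
# Requires root and the cvl_0_0/cvl_0_1 devices of this specific host;
# intentionally only defined, never called, in this illustration.
setup_target_ns() {
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # target port lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
```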
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1254192 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1254192 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1254192 ']' 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:56.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:56.673 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.673 [2024-12-15 13:19:04.465542] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:56.673 [2024-12-15 13:19:04.466427] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:56.673 [2024-12-15 13:19:04.466460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:56.673 [2024-12-15 13:19:04.547027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.673 [2024-12-15 13:19:04.567818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:56.673 [2024-12-15 13:19:04.567871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:56.673 [2024-12-15 13:19:04.567878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:56.673 [2024-12-15 13:19:04.567885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:56.673 [2024-12-15 13:19:04.567890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:56.673 [2024-12-15 13:19:04.568370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:56.933 [2024-12-15 13:19:04.630865] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:56.933 [2024-12-15 13:19:04.631069] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.933 [2024-12-15 13:19:04.697056] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.933 Malloc0 00:38:56.933 13:19:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.933 [2024-12-15 13:19:04.773163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.933 
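Steps @23 through @27 of queue_depth.sh above build the target over the RPC socket: create the TCP transport, back it with a 64 MiB malloc bdev, and expose it through a subsystem listening on 10.0.0.2:4420. Reconstructed here as the equivalent rpc.py invocations (the log drives them through the rpc_cmd wrapper; the bare rpc.py form is an assumption). It needs a running nvmf_tgt, so the function is defined but not invoked:

```shell
#!/usr/bin/env bash
# The queue_depth.sh target setup as plain rpc.py calls, reconstructed
# from the rpc_cmd lines @23..@27 above. Requires a live nvmf_tgt, so
# this is only defined here, never called.
setup_queue_depth_target() {
    rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8192 B IO unit
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```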
13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1254215 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1254215 /var/tmp/bdevperf.sock 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1254215 ']' 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:56.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:56.933 13:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:56.933 [2024-12-15 13:19:04.825356] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:56.933 [2024-12-15 13:19:04.825403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254215 ] 00:38:57.192 [2024-12-15 13:19:04.902987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.192 [2024-12-15 13:19:04.925192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.192 13:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:57.192 13:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:57.192 13:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:57.192 13:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:57.192 13:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:57.451 NVMe0n1 00:38:57.451 13:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:57.451 13:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:57.451 Running I/O for 10 seconds... 
00:38:59.762 11792.00 IOPS, 46.06 MiB/s [2024-12-15T12:19:08.604Z] 12145.00 IOPS, 47.44 MiB/s [2024-12-15T12:19:09.540Z] 12281.67 IOPS, 47.98 MiB/s [2024-12-15T12:19:10.475Z] 12353.00 IOPS, 48.25 MiB/s [2024-12-15T12:19:11.411Z] 12439.40 IOPS, 48.59 MiB/s [2024-12-15T12:19:12.346Z] 12460.00 IOPS, 48.67 MiB/s [2024-12-15T12:19:13.720Z] 12491.43 IOPS, 48.79 MiB/s [2024-12-15T12:19:14.655Z] 12533.50 IOPS, 48.96 MiB/s [2024-12-15T12:19:15.591Z] 12554.33 IOPS, 49.04 MiB/s [2024-12-15T12:19:15.591Z] 12587.10 IOPS, 49.17 MiB/s 00:39:07.685 Latency(us) 00:39:07.685 [2024-12-15T12:19:15.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:07.685 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:07.685 Verification LBA range: start 0x0 length 0x4000 00:39:07.685 NVMe0n1 : 10.06 12606.82 49.25 0.00 0.00 80965.05 19348.72 52179.14 00:39:07.685 [2024-12-15T12:19:15.592Z] =================================================================================================================== 00:39:07.685 [2024-12-15T12:19:15.592Z] Total : 12606.82 49.25 0.00 0.00 80965.05 19348.72 52179.14 00:39:07.685 { 00:39:07.685 "results": [ 00:39:07.685 { 00:39:07.685 "job": "NVMe0n1", 00:39:07.685 "core_mask": "0x1", 00:39:07.685 "workload": "verify", 00:39:07.685 "status": "finished", 00:39:07.685 "verify_range": { 00:39:07.685 "start": 0, 00:39:07.685 "length": 16384 00:39:07.685 }, 00:39:07.685 "queue_depth": 1024, 00:39:07.685 "io_size": 4096, 00:39:07.685 "runtime": 10.061059, 00:39:07.685 "iops": 12606.823993378828, 00:39:07.685 "mibps": 49.24540622413605, 00:39:07.685 "io_failed": 0, 00:39:07.685 "io_timeout": 0, 00:39:07.685 "avg_latency_us": 80965.0544352714, 00:39:07.685 "min_latency_us": 19348.72380952381, 00:39:07.685 "max_latency_us": 52179.13904761905 00:39:07.685 } 00:39:07.685 ], 00:39:07.685 "core_count": 1 00:39:07.685 } 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1254215 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1254215 ']' 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1254215 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254215 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254215' 00:39:07.685 killing process with pid 1254215 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1254215 00:39:07.685 Received shutdown signal, test time was about 10.000000 seconds 00:39:07.685 00:39:07.685 Latency(us) 00:39:07.685 [2024-12-15T12:19:15.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:07.685 [2024-12-15T12:19:15.592Z] =================================================================================================================== 00:39:07.685 [2024-12-15T12:19:15.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:07.685 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1254215 00:39:07.944 13:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:07.944 rmmod nvme_tcp 00:39:07.944 rmmod nvme_fabrics 00:39:07.944 rmmod nvme_keyring 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1254192 ']' 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1254192 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1254192 ']' 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1254192 00:39:07.944 13:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254192 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254192' 00:39:07.944 killing process with pid 1254192 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1254192 00:39:07.944 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1254192 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:08.204 13:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.109 13:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:10.109 00:39:10.109 real 0m19.592s 00:39:10.109 user 0m22.776s 00:39:10.109 sys 0m6.176s 00:39:10.109 13:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:10.109 13:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:10.109 ************************************ 00:39:10.109 END TEST nvmf_queue_depth 00:39:10.109 ************************************ 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:10.369 ************************************ 00:39:10.369 START 
TEST nvmf_target_multipath 00:39:10.369 ************************************ 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:10.369 * Looking for test storage... 00:39:10.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:10.369 13:19:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:10.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.369 --rc genhtml_branch_coverage=1 00:39:10.369 --rc genhtml_function_coverage=1 00:39:10.369 --rc genhtml_legend=1 00:39:10.369 --rc geninfo_all_blocks=1 00:39:10.369 --rc geninfo_unexecuted_blocks=1 00:39:10.369 00:39:10.369 ' 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:10.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.369 --rc genhtml_branch_coverage=1 00:39:10.369 --rc genhtml_function_coverage=1 00:39:10.369 --rc genhtml_legend=1 00:39:10.369 --rc geninfo_all_blocks=1 00:39:10.369 --rc geninfo_unexecuted_blocks=1 00:39:10.369 00:39:10.369 ' 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:10.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.369 --rc genhtml_branch_coverage=1 00:39:10.369 --rc genhtml_function_coverage=1 00:39:10.369 --rc genhtml_legend=1 00:39:10.369 --rc geninfo_all_blocks=1 00:39:10.369 --rc geninfo_unexecuted_blocks=1 00:39:10.369 00:39:10.369 ' 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:10.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.369 --rc genhtml_branch_coverage=1 00:39:10.369 --rc genhtml_function_coverage=1 00:39:10.369 --rc genhtml_legend=1 00:39:10.369 --rc geninfo_all_blocks=1 00:39:10.369 --rc geninfo_unexecuted_blocks=1 00:39:10.369 00:39:10.369 ' 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:10.369 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:10.370 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:10.370 13:19:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.630 13:19:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:10.630 13:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:17.201 13:19:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:17.201 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:17.201 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:17.201 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:17.202 Found net devices under 0000:af:00.0: cvl_0_0 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.202 13:19:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:17.202 Found net devices under 0000:af:00.1: cvl_0_1 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:17.202 13:19:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:17.202 13:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:17.202 13:19:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:17.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:17.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:39:17.202 00:39:17.202 --- 10.0.0.2 ping statistics --- 00:39:17.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.202 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:17.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:17.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:39:17.202 00:39:17.202 --- 10.0.0.1 ping statistics --- 00:39:17.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.202 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:17.202 only one NIC for nvmf test 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:17.202 13:19:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:17.202 rmmod nvme_tcp 00:39:17.202 rmmod nvme_fabrics 00:39:17.202 rmmod nvme_keyring 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:17.202 13:19:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:17.202 13:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.579 
13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:18.579 00:39:18.579 real 0m8.207s 00:39:18.579 user 0m1.855s 00:39:18.579 sys 0m4.368s 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:18.579 ************************************ 00:39:18.579 END TEST nvmf_target_multipath 00:39:18.579 ************************************ 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:18.579 ************************************ 00:39:18.579 START TEST nvmf_zcopy 00:39:18.579 ************************************ 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:18.579 * Looking for test storage... 
00:39:18.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:18.579 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:18.839 13:19:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:18.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.839 --rc genhtml_branch_coverage=1 00:39:18.839 --rc genhtml_function_coverage=1 00:39:18.839 --rc genhtml_legend=1 00:39:18.839 --rc geninfo_all_blocks=1 00:39:18.839 --rc geninfo_unexecuted_blocks=1 00:39:18.839 00:39:18.839 ' 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:18.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.839 --rc genhtml_branch_coverage=1 00:39:18.839 --rc genhtml_function_coverage=1 00:39:18.839 --rc genhtml_legend=1 00:39:18.839 --rc geninfo_all_blocks=1 00:39:18.839 --rc geninfo_unexecuted_blocks=1 00:39:18.839 00:39:18.839 ' 00:39:18.839 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:18.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.839 --rc genhtml_branch_coverage=1 00:39:18.839 --rc genhtml_function_coverage=1 00:39:18.839 --rc genhtml_legend=1 00:39:18.839 --rc geninfo_all_blocks=1 00:39:18.839 --rc geninfo_unexecuted_blocks=1 00:39:18.839 00:39:18.839 ' 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:18.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.840 --rc genhtml_branch_coverage=1 00:39:18.840 --rc genhtml_function_coverage=1 00:39:18.840 --rc genhtml_legend=1 00:39:18.840 --rc geninfo_all_blocks=1 00:39:18.840 --rc geninfo_unexecuted_blocks=1 00:39:18.840 00:39:18.840 ' 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:18.840 13:19:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:18.840 13:19:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:18.840 13:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:25.411 
13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:25.411 13:19:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:25.411 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:25.411 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:25.411 Found net devices under 0000:af:00.0: cvl_0_0 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:25.411 Found net devices under 0000:af:00.1: cvl_0_1 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:25.411 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:25.412 13:19:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:25.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:25.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:39:25.412 00:39:25.412 --- 10.0.0.2 ping statistics --- 00:39:25.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:25.412 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:25.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:25.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:39:25.412 00:39:25.412 --- 10.0.0.1 ping statistics --- 00:39:25.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:25.412 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1262688 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1262688 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1262688 ']' 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:25.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.412 [2024-12-15 13:19:32.425707] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:25.412 [2024-12-15 13:19:32.426714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:25.412 [2024-12-15 13:19:32.426754] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:25.412 [2024-12-15 13:19:32.506478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.412 [2024-12-15 13:19:32.527138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:25.412 [2024-12-15 13:19:32.527176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:25.412 [2024-12-15 13:19:32.527184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:25.412 [2024-12-15 13:19:32.527189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:25.412 [2024-12-15 13:19:32.527194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:25.412 [2024-12-15 13:19:32.527654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:25.412 [2024-12-15 13:19:32.589607] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:25.412 [2024-12-15 13:19:32.589830] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.412 [2024-12-15 13:19:32.668324] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.412 
13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.412 [2024-12-15 13:19:32.696546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.412 malloc0 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:25.412 { 00:39:25.412 "params": { 00:39:25.412 "name": "Nvme$subsystem", 00:39:25.412 "trtype": "$TEST_TRANSPORT", 00:39:25.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:25.412 "adrfam": "ipv4", 00:39:25.412 "trsvcid": "$NVMF_PORT", 00:39:25.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:25.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:25.412 "hdgst": ${hdgst:-false}, 00:39:25.412 "ddgst": ${ddgst:-false} 00:39:25.412 }, 00:39:25.412 "method": "bdev_nvme_attach_controller" 00:39:25.412 } 00:39:25.412 EOF 00:39:25.412 )") 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:25.412 13:19:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:25.412 13:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:25.412 "params": { 00:39:25.412 "name": "Nvme1", 00:39:25.412 "trtype": "tcp", 00:39:25.412 "traddr": "10.0.0.2", 00:39:25.413 "adrfam": "ipv4", 00:39:25.413 "trsvcid": "4420", 00:39:25.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:25.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:25.413 "hdgst": false, 00:39:25.413 "ddgst": false 00:39:25.413 }, 00:39:25.413 "method": "bdev_nvme_attach_controller" 00:39:25.413 }' 00:39:25.413 [2024-12-15 13:19:32.794248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:25.413 [2024-12-15 13:19:32.794294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262875 ] 00:39:25.413 [2024-12-15 13:19:32.869007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.413 [2024-12-15 13:19:32.891185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.413 Running I/O for 10 seconds... 
00:39:27.283 8621.00 IOPS, 67.35 MiB/s [2024-12-15T12:19:36.127Z] 8709.00 IOPS, 68.04 MiB/s [2024-12-15T12:19:37.063Z] 8718.33 IOPS, 68.11 MiB/s [2024-12-15T12:19:38.440Z] 8724.00 IOPS, 68.16 MiB/s [2024-12-15T12:19:39.376Z] 8727.40 IOPS, 68.18 MiB/s [2024-12-15T12:19:40.311Z] 8738.67 IOPS, 68.27 MiB/s [2024-12-15T12:19:41.253Z] 8739.57 IOPS, 68.28 MiB/s [2024-12-15T12:19:42.189Z] 8723.12 IOPS, 68.15 MiB/s [2024-12-15T12:19:43.125Z] 8724.56 IOPS, 68.16 MiB/s [2024-12-15T12:19:43.125Z] 8725.50 IOPS, 68.17 MiB/s 00:39:35.218 Latency(us) 00:39:35.218 [2024-12-15T12:19:43.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:35.218 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:35.218 Verification LBA range: start 0x0 length 0x1000 00:39:35.218 Nvme1n1 : 10.01 8727.18 68.18 0.00 0.00 14624.29 1451.15 20846.69 00:39:35.218 [2024-12-15T12:19:43.125Z] =================================================================================================================== 00:39:35.218 [2024-12-15T12:19:43.125Z] Total : 8727.18 68.18 0.00 0.00 14624.29 1451.15 20846.69 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1264477 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:35.477 13:19:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:35.477 { 00:39:35.477 "params": { 00:39:35.477 "name": "Nvme$subsystem", 00:39:35.477 "trtype": "$TEST_TRANSPORT", 00:39:35.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:35.477 "adrfam": "ipv4", 00:39:35.477 "trsvcid": "$NVMF_PORT", 00:39:35.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:35.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:35.477 "hdgst": ${hdgst:-false}, 00:39:35.477 "ddgst": ${ddgst:-false} 00:39:35.477 }, 00:39:35.477 "method": "bdev_nvme_attach_controller" 00:39:35.477 } 00:39:35.477 EOF 00:39:35.477 )") 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:35.477 [2024-12-15 13:19:43.235994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.236033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:35.477 13:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:35.477 "params": { 00:39:35.477 "name": "Nvme1", 00:39:35.477 "trtype": "tcp", 00:39:35.477 "traddr": "10.0.0.2", 00:39:35.477 "adrfam": "ipv4", 00:39:35.477 "trsvcid": "4420", 00:39:35.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:35.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:35.477 "hdgst": false, 00:39:35.477 "ddgst": false 00:39:35.477 }, 00:39:35.477 "method": "bdev_nvme_attach_controller" 00:39:35.477 }' 00:39:35.477 [2024-12-15 13:19:43.247951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.247965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.259944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.259956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.271943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.271954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.277270] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:35.477 [2024-12-15 13:19:43.277313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264477 ] 00:39:35.477 [2024-12-15 13:19:43.283942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.283953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.295942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.295952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.307944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.307954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.319942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.319952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.331942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.331953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.343942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.343953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.352773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.477 [2024-12-15 13:19:43.355943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:35.477 [2024-12-15 13:19:43.355953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.367943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.367957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.477 [2024-12-15 13:19:43.375018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.477 [2024-12-15 13:19:43.379954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.477 [2024-12-15 13:19:43.379973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.391965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.391990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.403955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.403969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.415949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.415965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.427944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.427956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.439946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.439960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.451955] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.451974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.463952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.463970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.475951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.475966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.487947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.487962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.736 [2024-12-15 13:19:43.499946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.736 [2024-12-15 13:19:43.499959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.511942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.511952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.523942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.523952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.535995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.536009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.547942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.547953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.559941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.559951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.571941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.571951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.583950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.583966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.595941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.595951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.607941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.607950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.619943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.619953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.737 [2024-12-15 13:19:43.631949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.737 [2024-12-15 13:19:43.631967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.995 [2024-12-15 13:19:43.644023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.995 
[2024-12-15 13:19:43.644040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.995 Running I/O for 5 seconds... 00:39:35.995 [2024-12-15 13:19:43.659762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.995 [2024-12-15 13:19:43.659782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.995 [2024-12-15 13:19:43.673890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.995 [2024-12-15 13:19:43.673909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.995 [2024-12-15 13:19:43.688452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.995 [2024-12-15 13:19:43.688471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.995 [2024-12-15 13:19:43.700498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.995 [2024-12-15 13:19:43.700517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.995 [2024-12-15 13:19:43.713602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.995 [2024-12-15 13:19:43.713621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.995 [2024-12-15 13:19:43.728345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.995 [2024-12-15 13:19:43.728363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.995 [2024-12-15 13:19:43.743614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.995 [2024-12-15 13:19:43.743632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.995 [2024-12-15 13:19:43.757089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.995 [2024-12-15 
13:19:43.757107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.771761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.771780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.785576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.785595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.800282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.800300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.815710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.815729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.829430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.829453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.844135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.844154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.854837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.854855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.869400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.869419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.883400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.883419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.996 [2024-12-15 13:19:43.897618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:35.996 [2024-12-15 13:19:43.897637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:43.911795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:43.911814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:43.924664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:43.924683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:43.937705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:43.937725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:43.952246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:43.952265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:43.967897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:43.967918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:43.981185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:43.981205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 
[2024-12-15 13:19:43.995791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:43.995810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.009100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.009119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.023997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.024016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.036125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.036144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.049158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.049177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.064049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.064068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.076818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.076843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.089184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.089203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.103769] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.103789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.117806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.117832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.132280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.132299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.255 [2024-12-15 13:19:44.147689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.255 [2024-12-15 13:19:44.147708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.161735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.161755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.176244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.176264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.192049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.192068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.203757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.203776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.217084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.217103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.231548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.231567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.245679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.245698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.259928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.259948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.271021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.271040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.285229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.285251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.299730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.299749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.313565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.313584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.327831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 
[2024-12-15 13:19:44.327851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.338586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.338605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.353116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.353135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.368220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.368239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.383765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.383783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.397424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.397442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.514 [2024-12-15 13:19:44.408304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.514 [2024-12-15 13:19:44.408322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.421987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.422009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.436433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.436454] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.451485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.451505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.465870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.465888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.480254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.480272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.495986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.496005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.509903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.509922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.524571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.524590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.539392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.539410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.553734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.553761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:36.773 [2024-12-15 13:19:44.568480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.568499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.584494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.584512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.596866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.596885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.609350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.609369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.623933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.623952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.634990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.635008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 [2024-12-15 13:19:44.649430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.649449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:36.773 17097.00 IOPS, 133.57 MiB/s [2024-12-15T12:19:44.680Z] [2024-12-15 13:19:44.664182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.773 [2024-12-15 13:19:44.664201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:36.773 [2024-12-15 13:19:44.676051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:36.774 [2024-12-15 13:19:44.676070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.032 [2024-12-15 13:19:44.689945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.689964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.704677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.704697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.719732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.719751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.732961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.732980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.747767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.747786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.761569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.761587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.775753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.775772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.789774] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.789792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.804254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.804273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.819694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.819717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.833814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.833838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.847983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.848003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.859033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.859051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.873897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.873916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.888387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:37.033 [2024-12-15 13:19:44.888405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:37.033 [2024-12-15 13:19:44.900702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:39:37.033 [2024-12-15 13:19:44.900720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.033 [2024-12-15 13:19:44.915525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.033 [2024-12-15 13:19:44.915544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.033 [2024-12-15 13:19:44.929780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.033 [2024-12-15 13:19:44.929800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:44.944466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.323 [2024-12-15 13:19:44.944486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:44.960982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.323 [2024-12-15 13:19:44.961001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:44.976082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.323 [2024-12-15 13:19:44.976101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:44.987542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.323 [2024-12-15 13:19:44.987560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:45.002025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.323 [2024-12-15 13:19:45.002043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:45.016782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.323 [2024-12-15 13:19:45.016800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:45.032541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.323 [2024-12-15 13:19:45.032560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:45.047537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.323 [2024-12-15 13:19:45.047560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:45.061795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.323 [2024-12-15 13:19:45.061814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.323 [2024-12-15 13:19:45.075922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.075941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.088742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.088760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.101841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.101859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.116252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.116271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.131886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.131904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.145677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.145695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.160065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.160084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.171101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.171134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.185058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.185075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.199939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.199958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.213813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.213837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.324 [2024-12-15 13:19:45.228204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.324 [2024-12-15 13:19:45.228222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.240833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.240851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.253786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.253803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.268094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.268112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.280517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.280535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.293470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.293487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.307831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.307849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.320397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.320413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.336013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.336031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.348747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.348770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.363877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.363896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.376081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.376100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.389956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.389975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.404377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.404397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.419408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.419428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.433662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.433681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.448144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.448163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.461162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.461181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.476105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.476122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.582 [2024-12-15 13:19:45.486618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.582 [2024-12-15 13:19:45.486636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.501710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.501731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.516078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.516097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.528307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.528326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.541957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.541977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.556135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.556154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.568564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.568585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.581029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.581049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.596081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.596099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.607677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.607697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.621701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.621720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.636075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.636094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.648347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.648366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 17082.00 IOPS, 133.45 MiB/s [2024-12-15T12:19:45.747Z] [2024-12-15 13:19:45.661271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.661290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.676104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.676124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.688003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.688022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.702125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.702145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.716792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.716812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.731645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.731664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:37.840 [2024-12-15 13:19:45.745613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:37.840 [2024-12-15 13:19:45.745633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.760570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.760590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.775621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.775641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.789742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.789761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.803961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.803981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.816349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.816367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.828970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.828988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.843799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.843818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.856945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.856968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.871999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.872018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.885239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.885258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.900175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.900192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.912015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.912033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.925897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.925915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.940232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.940249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.955743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.955763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.968431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.968449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.981548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.981566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.099 [2024-12-15 13:19:45.996117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.099 [2024-12-15 13:19:45.996136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.009285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.009305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.023815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.023842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.037339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.037358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.052043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.052062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.064786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.064804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.079370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.079388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.093084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.093103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.108043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.108062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.120370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.120393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.133115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.133133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.147476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.147495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.160750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.160768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.175393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.175413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.189899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.189917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.204068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.204087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.217739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.217758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.232064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.232083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.245689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.358 [2024-12-15 13:19:46.245708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.358 [2024-12-15 13:19:46.259868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.359 [2024-12-15 13:19:46.259889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.272498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.272517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.285429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.285447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.299889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.299908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.312417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.312435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.325175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.325194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.339727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.339762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.350638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.350657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.365083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.365102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.379161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.379184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.393209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.393227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.408144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.408163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.420738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.420757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.436327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.436345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.452134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.452153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.464712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.464731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.479712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.479732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.492691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.492710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.507743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.617 [2024-12-15 13:19:46.507761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.617 [2024-12-15 13:19:46.521669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.618 [2024-12-15 13:19:46.521688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.876 [2024-12-15 13:19:46.536261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.876 [2024-12-15 13:19:46.536279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.876 [2024-12-15 13:19:46.549665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.876 [2024-12-15 13:19:46.549684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.876 [2024-12-15 13:19:46.564219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.876 [2024-12-15 13:19:46.564237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.876 [2024-12-15 13:19:46.580110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.580130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.592367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.592385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.605700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.605719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.620108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.620127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.632328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.632346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.646069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.646092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 17118.67 IOPS, 133.74 MiB/s [2024-12-15T12:19:46.784Z] [2024-12-15 13:19:46.660197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.660215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.671189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.671208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.685728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.685746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.700233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.700251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.715946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.715965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.728485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.728503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.743407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.743425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.757524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.757542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:38.877 [2024-12-15 13:19:46.771546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:38.877 [2024-12-15 13:19:46.771564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.785767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.785786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.800385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.800403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.815547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.815568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.829932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.829951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.844248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.844267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.860045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.860065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.873981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.874001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.888639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.888658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.903521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.903540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.917895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.917914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.932343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.932362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.947603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.947623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.962167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.962187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.976653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.976673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:46.991070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:46.991089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:47.005339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:47.005358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:47.019877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:47.019897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.136 [2024-12-15 13:19:47.030610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.136 [2024-12-15 13:19:47.030628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.045750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.045772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.060062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.060083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.071485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.071504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.085452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.085471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.099965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.099984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.110727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.110746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.124981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.125001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.140281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.140299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.152554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.152572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:39.395 [2024-12-15 13:19:47.165809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:39.395 [2024-12-15 13:19:47.165835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:39:39.395 [2024-12-15 13:19:47.180180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.395 [2024-12-15 13:19:47.180199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.395 [2024-12-15 13:19:47.190407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.395 [2024-12-15 13:19:47.190425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.395 [2024-12-15 13:19:47.204906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.395 [2024-12-15 13:19:47.204925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.395 [2024-12-15 13:19:47.219628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.395 [2024-12-15 13:19:47.219647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.395 [2024-12-15 13:19:47.233137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.395 [2024-12-15 13:19:47.233155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.395 [2024-12-15 13:19:47.247881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.395 [2024-12-15 13:19:47.247901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.395 [2024-12-15 13:19:47.261523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.395 [2024-12-15 13:19:47.261541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.395 [2024-12-15 13:19:47.275858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.395 [2024-12-15 13:19:47.275877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.395 [2024-12-15 13:19:47.287218] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.395 [2024-12-15 13:19:47.287236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.302365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.302385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.316959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.316978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.331638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.331656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.344599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.344617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.359666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.359684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.373653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.373672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.388163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.388182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.400001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.400020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.413770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.413788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.428133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.428155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.439242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.439260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.453580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.453598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.468681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.468699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.484024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.484044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.496863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.496883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.511493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 
[2024-12-15 13:19:47.511512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.524846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.524864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.539744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.539762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.654 [2024-12-15 13:19:47.553157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.654 [2024-12-15 13:19:47.553175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.567884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.567904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.578814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.578839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.593592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.593611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.607857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.607876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.620566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.620584] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.633763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.633782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.648062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.648081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 17097.50 IOPS, 133.57 MiB/s [2024-12-15T12:19:47.820Z] [2024-12-15 13:19:47.659311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.659331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.674178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.674197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.688627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.688652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.704007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.704025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.715152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.715171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.729924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.913 [2024-12-15 13:19:47.729943] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.913 [2024-12-15 13:19:47.744480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.914 [2024-12-15 13:19:47.744499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.914 [2024-12-15 13:19:47.759679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.914 [2024-12-15 13:19:47.759698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.914 [2024-12-15 13:19:47.772949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.914 [2024-12-15 13:19:47.772967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.914 [2024-12-15 13:19:47.788017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.914 [2024-12-15 13:19:47.788036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.914 [2024-12-15 13:19:47.802149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.914 [2024-12-15 13:19:47.802168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:39.914 [2024-12-15 13:19:47.816284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:39.914 [2024-12-15 13:19:47.816302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.172 [2024-12-15 13:19:47.831605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.831625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.172 [2024-12-15 13:19:47.845685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.845704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:40.172 [2024-12-15 13:19:47.859958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.859976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.172 [2024-12-15 13:19:47.871201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.871219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.172 [2024-12-15 13:19:47.885539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.885557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.172 [2024-12-15 13:19:47.900107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.900126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.172 [2024-12-15 13:19:47.911081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.911100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.172 [2024-12-15 13:19:47.925795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.925814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.172 [2024-12-15 13:19:47.940473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.940492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.172 [2024-12-15 13:19:47.955636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.172 [2024-12-15 13:19:47.955659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.173 [2024-12-15 13:19:47.969984] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.173 [2024-12-15 13:19:47.970002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.173 [2024-12-15 13:19:47.984627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.173 [2024-12-15 13:19:47.984645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.173 [2024-12-15 13:19:47.997591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.173 [2024-12-15 13:19:47.997609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.173 [2024-12-15 13:19:48.012375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.173 [2024-12-15 13:19:48.012393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.173 [2024-12-15 13:19:48.024666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.173 [2024-12-15 13:19:48.024684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.173 [2024-12-15 13:19:48.039998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.173 [2024-12-15 13:19:48.040016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.173 [2024-12-15 13:19:48.052997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.173 [2024-12-15 13:19:48.053015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.173 [2024-12-15 13:19:48.067889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.173 [2024-12-15 13:19:48.067907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.431 [2024-12-15 13:19:48.082202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:40.431 [2024-12-15 13:19:48.082223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.431 [2024-12-15 13:19:48.096784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.431 [2024-12-15 13:19:48.096802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.431 [2024-12-15 13:19:48.112310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.112329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.128155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.128174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.139821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.139845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.153884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.153902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.167987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.168005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.180486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.180504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.193344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 
[2024-12-15 13:19:48.193362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.208348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.208367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.221595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.221613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.236134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.236153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.246988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.247007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.261848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.261867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.276432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.276452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.292041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.292061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.305753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.305772] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.320444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.320462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.432 [2024-12-15 13:19:48.336402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.432 [2024-12-15 13:19:48.336421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.351667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.351687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.365964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.365983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.380536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.380555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.396123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.396143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.407187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.407206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.421943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.421963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:40.691 [2024-12-15 13:19:48.436744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.436763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.452112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.452132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.464596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.464615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.477140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.477159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.489430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.489451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.503929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.503949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.517116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.517135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.531616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.531636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.544994] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.545013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.559801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.559820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.572557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.572575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.691 [2024-12-15 13:19:48.585623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.691 [2024-12-15 13:19:48.585643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.600568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.600588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.615818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.615847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.630125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.630144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.644526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.644545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 17079.20 IOPS, 133.43 MiB/s [2024-12-15T12:19:48.857Z] [2024-12-15 13:19:48.660187] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.660206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 00:39:40.950 Latency(us) 00:39:40.950 [2024-12-15T12:19:48.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.950 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:40.950 Nvme1n1 : 5.01 17082.43 133.46 0.00 0.00 7486.19 2012.89 12670.29 00:39:40.950 [2024-12-15T12:19:48.857Z] =================================================================================================================== 00:39:40.950 [2024-12-15T12:19:48.857Z] Total : 17082.43 133.46 0.00 0.00 7486.19 2012.89 12670.29 00:39:40.950 [2024-12-15 13:19:48.671947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.671965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.683948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.683967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.695958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.695983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.707951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.707969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.719949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.719962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:39:40.950 [2024-12-15 13:19:48.731945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.731958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.743946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.743963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.755946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.755959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.767946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.767959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.779943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.779953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.791947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.791959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.803944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.803955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 [2024-12-15 13:19:48.815943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.950 [2024-12-15 13:19:48.815953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.950 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1264477) - No such process 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1264477 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.951 delay0 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.951 13:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:41.209 [2024-12-15 13:19:48.919347] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:49.327 Initializing NVMe Controllers 00:39:49.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:49.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:49.327 Initialization complete. Launching workers. 00:39:49.327 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6768 00:39:49.327 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7058, failed to submit 30 00:39:49.327 success 6951, unsuccessful 107, failed 0 00:39:49.327 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:49.327 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:49.327 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:49.327 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:49.327 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:49.328 rmmod nvme_tcp 00:39:49.328 rmmod nvme_fabrics 00:39:49.328 rmmod nvme_keyring 00:39:49.328 13:19:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1262688 ']' 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1262688 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1262688 ']' 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1262688 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1262688 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1262688' 00:39:49.328 killing process with pid 1262688 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1262688 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1262688 00:39:49.328 13:19:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.328 13:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:50.809 00:39:50.809 real 0m32.080s 00:39:50.809 user 0m41.869s 00:39:50.809 sys 0m12.548s 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:50.809 ************************************ 
00:39:50.809 END TEST nvmf_zcopy 00:39:50.809 ************************************ 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:50.809 ************************************ 00:39:50.809 START TEST nvmf_nmic 00:39:50.809 ************************************ 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:50.809 * Looking for test storage... 
00:39:50.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:50.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.809 --rc genhtml_branch_coverage=1 00:39:50.809 --rc genhtml_function_coverage=1 00:39:50.809 --rc genhtml_legend=1 00:39:50.809 --rc geninfo_all_blocks=1 00:39:50.809 --rc geninfo_unexecuted_blocks=1 00:39:50.809 00:39:50.809 ' 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:50.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.809 --rc genhtml_branch_coverage=1 00:39:50.809 --rc genhtml_function_coverage=1 00:39:50.809 --rc genhtml_legend=1 00:39:50.809 --rc geninfo_all_blocks=1 00:39:50.809 --rc geninfo_unexecuted_blocks=1 00:39:50.809 00:39:50.809 ' 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:50.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.809 --rc genhtml_branch_coverage=1 00:39:50.809 --rc genhtml_function_coverage=1 00:39:50.809 --rc genhtml_legend=1 00:39:50.809 --rc geninfo_all_blocks=1 00:39:50.809 --rc geninfo_unexecuted_blocks=1 00:39:50.809 00:39:50.809 ' 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:50.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.809 --rc genhtml_branch_coverage=1 00:39:50.809 --rc genhtml_function_coverage=1 00:39:50.809 --rc genhtml_legend=1 00:39:50.809 --rc geninfo_all_blocks=1 00:39:50.809 --rc geninfo_unexecuted_blocks=1 00:39:50.809 00:39:50.809 ' 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:50.809 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.810 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:51.069 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:51.069 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:51.069 13:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:56.345 13:20:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:56.345 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:56.605 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:56.605 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:56.605 Found net devices under 0000:af:00.0: cvl_0_0 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:56.605 Found net devices under 0000:af:00.1: cvl_0_1 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:56.605 13:20:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:56.605 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:56.864 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:39:56.864 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:56.864 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:56.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:56.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:39:56.865 00:39:56.865 --- 10.0.0.2 ping statistics --- 00:39:56.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.865 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:56.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:56.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:39:56.865 00:39:56.865 --- 10.0.0.1 ping statistics --- 00:39:56.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.865 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1269953 
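The interface plumbing traced above (nvmf/common.sh's nvmf_tcp_init, @250-@291) follows a fixed pattern: flush both NIC ports, move the target-side port into a private network namespace, address both sides on the same /24, open TCP port 4420 on the initiator side, and ping in both directions before starting nvmf_tgt. A minimal sketch of that sequence, using the interface names and addresses from this run; the `run`/`DRY_RUN` wrapper is an addition of this sketch (the real script executes the commands directly, and applying them needs root):

```shell
#!/usr/bin/env bash
# Sketch of the netns-based NVMe/TCP test topology from nvmf_tcp_init.
# DRY_RUN=1 (the default here) only records/prints the commands;
# DRY_RUN=0 applies them and requires root plus the named interfaces.
set -euo pipefail

TARGET_IF=cvl_0_0        # target-side port, moved into the namespace
INITIATOR_IF=cvl_0_1     # initiator-side port, stays in the root namespace
NS=${TARGET_IF}_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
DRY_RUN=${DRY_RUN:-1}
CMDS=()

run() {                  # record every command; execute only when DRY_RUN=0
    CMDS+=("$*")
    if [ "$DRY_RUN" = 0 ]; then "$@"; else echo "$*"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"      # target port now lives in $NS
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port, tagged with a comment for later cleanup
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# both directions must be reachable before nvmf_tgt starts in the namespace
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

With the topology applied, the target app is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`), which is why the initiator in the root namespace sees 10.0.0.2 as a remote NVMe/TCP endpoint.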
00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1269953 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1269953 ']' 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:56.865 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:56.865 [2024-12-15 13:20:04.695952] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:56.865 [2024-12-15 13:20:04.696838] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:56.865 [2024-12-15 13:20:04.696869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:57.124 [2024-12-15 13:20:04.776979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:57.124 [2024-12-15 13:20:04.801183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:57.125 [2024-12-15 13:20:04.801218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:57.125 [2024-12-15 13:20:04.801224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:57.125 [2024-12-15 13:20:04.801230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:57.125 [2024-12-15 13:20:04.801235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:57.125 [2024-12-15 13:20:04.802672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:57.125 [2024-12-15 13:20:04.802782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:57.125 [2024-12-15 13:20:04.802893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.125 [2024-12-15 13:20:04.802893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:57.125 [2024-12-15 13:20:04.867020] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:57.125 [2024-12-15 13:20:04.868159] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:57.125 [2024-12-15 13:20:04.868551] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:57.125 [2024-12-15 13:20:04.868917] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:57.125 [2024-12-15 13:20:04.868952] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.125 [2024-12-15 13:20:04.935671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.125 Malloc0 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.125 13:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.125 [2024-12-15 13:20:05.019842] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.125 13:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:57.125 test case1: single bdev can't be used in multiple subsystems 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.125 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.382 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.382 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:57.382 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.382 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.382 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.382 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:57.382 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:57.382 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.383 [2024-12-15 13:20:05.047384] 
bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:57.383 [2024-12-15 13:20:05.047409] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:57.383 [2024-12-15 13:20:05.047418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:57.383 request: 00:39:57.383 { 00:39:57.383 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:57.383 "namespace": { 00:39:57.383 "bdev_name": "Malloc0", 00:39:57.383 "no_auto_visible": false, 00:39:57.383 "hide_metadata": false 00:39:57.383 }, 00:39:57.383 "method": "nvmf_subsystem_add_ns", 00:39:57.383 "req_id": 1 00:39:57.383 } 00:39:57.383 Got JSON-RPC error response 00:39:57.383 response: 00:39:57.383 { 00:39:57.383 "code": -32602, 00:39:57.383 "message": "Invalid parameters" 00:39:57.383 } 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:57.383 Adding namespace failed - expected result. 
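Test case 1 above is an expected-failure check: `nvmf_subsystem_add_ns` on cnode2 must be rejected (JSON-RPC code -32602) because Malloc0 is already claimed by cnode1, and nmic.sh passes only when the RPC's exit status is non-zero. A self-contained sketch of that control flow; the `rpc_cmd` stub here is an assumption of this sketch standing in for the real helper, which invokes scripts/rpc.py against /var/tmp/spdk.sock:

```shell
# Sketch of the expected-failure pattern from target/nmic.sh (@28-@36).
# rpc_cmd is STUBBED so the sketch runs without a live SPDK target:
# it pretends Malloc0 is already claimed, as in the log above.
rpc_cmd() {
    if [ "$1" = nvmf_subsystem_add_ns ] && [ "$3" = Malloc0 ]; then
        echo '{"code": -32602, "message": "Invalid parameters"}' >&2
        return 1
    fi
    return 0
}

nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=$?

if [ "$nmic_status" -eq 0 ]; then
    # the second subsystem claimed the bdev: that is the real failure
    echo "Adding namespace passed - failure expected."
    RESULT=fail
else
    echo "Adding namespace failed - expected result."
    RESULT=ok
fi
```

The `|| nmic_status=$?` idiom is what lets the test script run under `set -e`-style tracing without aborting on the RPC error it is deliberately provoking.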
00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:57.383 test case2: host connect to nvmf target in multiple paths 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.383 [2024-12-15 13:20:05.059473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:57.383 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:57.947 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:57.947 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:57.947 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:57.947 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:57.947 13:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:59.847 13:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:59.847 13:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:59.847 13:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:59.847 13:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:59.847 13:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:59.847 13:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:59.847 13:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:59.847 [global] 00:39:59.847 thread=1 00:39:59.847 invalidate=1 00:39:59.847 rw=write 00:39:59.847 time_based=1 00:39:59.847 runtime=1 00:39:59.847 ioengine=libaio 00:39:59.847 direct=1 00:39:59.847 bs=4096 00:39:59.847 iodepth=1 00:39:59.848 norandommap=0 00:39:59.848 numjobs=1 00:39:59.848 00:39:59.848 verify_dump=1 00:39:59.848 verify_backlog=512 00:39:59.848 verify_state_save=0 00:39:59.848 do_verify=1 00:39:59.848 verify=crc32c-intel 00:39:59.848 [job0] 00:39:59.848 filename=/dev/nvme0n1 00:39:59.848 Could not set queue depth (nvme0n1) 00:40:00.105 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:00.105 fio-3.35 00:40:00.105 Starting 1 thread 00:40:01.478 00:40:01.478 job0: (groupid=0, jobs=1): err= 0: pid=1270557: Sun Dec 15 
13:20:09 2024 00:40:01.478 read: IOPS=2466, BW=9866KiB/s (10.1MB/s)(9876KiB/1001msec) 00:40:01.478 slat (nsec): min=6820, max=42714, avg=7897.77, stdev=1602.84 00:40:01.478 clat (usec): min=185, max=419, avg=215.64, stdev=20.91 00:40:01.478 lat (usec): min=193, max=429, avg=223.54, stdev=21.00 00:40:01.478 clat percentiles (usec): 00:40:01.478 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 196], 20.00th=[ 198], 00:40:01.478 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 210], 00:40:01.478 | 70.00th=[ 219], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 251], 00:40:01.478 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 310], 99.95th=[ 330], 00:40:01.478 | 99.99th=[ 420] 00:40:01.478 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:01.478 slat (nsec): min=9611, max=44351, avg=10847.66, stdev=1786.97 00:40:01.478 clat (usec): min=117, max=359, avg=158.43, stdev=42.02 00:40:01.478 lat (usec): min=134, max=398, avg=169.27, stdev=42.15 00:40:01.478 clat percentiles (usec): 00:40:01.478 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:40:01.478 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:40:01.478 | 70.00th=[ 143], 80.00th=[ 198], 90.00th=[ 241], 95.00th=[ 243], 00:40:01.478 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 273], 99.95th=[ 289], 00:40:01.478 | 99.99th=[ 359] 00:40:01.478 bw ( KiB/s): min=11624, max=11624, per=100.00%, avg=11624.00, stdev= 0.00, samples=1 00:40:01.478 iops : min= 2906, max= 2906, avg=2906.00, stdev= 0.00, samples=1 00:40:01.478 lat (usec) : 250=96.58%, 500=3.42% 00:40:01.478 cpu : usr=3.30%, sys=8.50%, ctx=5029, majf=0, minf=1 00:40:01.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:01.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.478 issued rwts: total=2469,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:01.478 
latency : target=0, window=0, percentile=100.00%, depth=1 00:40:01.478 00:40:01.478 Run status group 0 (all jobs): 00:40:01.478 READ: bw=9866KiB/s (10.1MB/s), 9866KiB/s-9866KiB/s (10.1MB/s-10.1MB/s), io=9876KiB (10.1MB), run=1001-1001msec 00:40:01.478 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:40:01.478 00:40:01.478 Disk stats (read/write): 00:40:01.478 nvme0n1: ios=2098/2506, merge=0/0, ticks=447/381, in_queue=828, util=91.68% 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:01.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 
-- # nvmfcleanup 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:01.478 rmmod nvme_tcp 00:40:01.478 rmmod nvme_fabrics 00:40:01.478 rmmod nvme_keyring 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1269953 ']' 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1269953 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1269953 ']' 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1269953 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1269953 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1269953' 00:40:01.478 killing process with pid 1269953 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1269953 00:40:01.478 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1269953 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:40:01.737 13:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.652 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:03.911 00:40:03.911 real 0m13.066s 00:40:03.911 user 0m24.170s 00:40:03.911 sys 0m6.061s 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:03.911 ************************************ 00:40:03.911 END TEST nvmf_nmic 00:40:03.911 ************************************ 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:03.911 ************************************ 00:40:03.911 START TEST nvmf_fio_target 00:40:03.911 ************************************ 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:03.911 * Looking for test storage... 
00:40:03.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:03.911 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:03.912 
13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.912 --rc genhtml_branch_coverage=1 00:40:03.912 --rc genhtml_function_coverage=1 00:40:03.912 --rc genhtml_legend=1 00:40:03.912 --rc geninfo_all_blocks=1 00:40:03.912 --rc geninfo_unexecuted_blocks=1 00:40:03.912 00:40:03.912 ' 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.912 --rc genhtml_branch_coverage=1 00:40:03.912 --rc genhtml_function_coverage=1 00:40:03.912 --rc genhtml_legend=1 00:40:03.912 --rc geninfo_all_blocks=1 00:40:03.912 --rc geninfo_unexecuted_blocks=1 00:40:03.912 00:40:03.912 ' 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.912 --rc genhtml_branch_coverage=1 00:40:03.912 --rc genhtml_function_coverage=1 00:40:03.912 --rc genhtml_legend=1 00:40:03.912 --rc geninfo_all_blocks=1 00:40:03.912 --rc geninfo_unexecuted_blocks=1 00:40:03.912 00:40:03.912 ' 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.912 --rc genhtml_branch_coverage=1 00:40:03.912 --rc genhtml_function_coverage=1 00:40:03.912 --rc genhtml_legend=1 00:40:03.912 --rc geninfo_all_blocks=1 
00:40:03.912 --rc geninfo_unexecuted_blocks=1 00:40:03.912 00:40:03.912 ' 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:03.912 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:04.172 
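The lcov version check traced above (`cmp_versions 1.15 '<' 2` in scripts/common.sh) splits both version strings on `.`, `-`, and `:` and compares them field by field. A minimal standalone sketch of that comparison; the function name `lt_version` is ours, not SPDK's, and it skips the digit validation the real `decimal` helper performs:

```shell
#!/usr/bin/env bash
# Field-wise "less than" version compare, modeled on the cmp_versions
# trace above. Missing fields compare as 0 (so "1.15" acts like "1.15.0").
lt_version() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal -> not less-than
}

# The case from the log: lcov 1.15 vs 2
lt_version 1.15 2 && echo "lcov 1.15 is older than 2"
```

When the check passes, the script exports the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options seen in `LCOV_OPTS` above.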
13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.172 13:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:04.172 
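The very long `PATH` echoed above is a side effect of paths/export.sh prepending the same toolchain directories (`/opt/golangci/...`, `/opt/protoc/...`, `/opt/go/...`) each time it is sourced, so repeated sourcing accumulates duplicates. Harmless, but if one wanted to clean it up, a common order-preserving dedup (not part of SPDK, shown only as a sketch) is:

```shell
#!/usr/bin/env bash
# Order-preserving PATH dedup: split on ':', keep the first occurrence of
# each entry, rejoin. awk's RS/ORS treat each PATH component as a record.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

# Shortened stand-in for the duplicated PATH from the trace:
dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
echo
```

The first occurrence of each directory keeps its position, so lookup precedence is unchanged.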
13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:04.172 13:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:04.172 13:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:10.742 13:20:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:10.742 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:10.742 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:10.742 
13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:10.742 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:10.742 Found net devices under 0000:af:00.1: cvl_0_1 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:10.742 13:20:17 
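The discovery loop traced above maps each supported PCI address to its kernel network interfaces by globbing sysfs and stripping the directory prefix (the `"${pci_net_devs[@]##*/}"` step that turns `.../net/cvl_0_0` into `cvl_0_0`). A self-contained sketch of that lookup; on a machine without the device it simply prints nothing:

```shell
#!/usr/bin/env bash
# List net interfaces backing a PCI device, as in the nvmf/common.sh loop
# above. nullglob makes the glob expand to an empty array when the device
# (or its net/ directory) does not exist.
shopt -s nullglob
net_devs_for_pci() {
    local pci=$1
    local -a pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Keep only the interface names: .../net/cvl_0_0 -> cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    if (( ${#pci_net_devs[@]} )); then
        printf '%s\n' "${pci_net_devs[@]}"
    fi
}

# Address taken from the log; prints nothing unless that NIC exists here.
net_devs_for_pci 0000:af:00.0
```

In the run above both E810 ports (0x8086:0x159b, `ice` driver) resolve to `cvl_0_0` and `cvl_0_1`, giving the two entries in `net_devs`.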
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:10.742 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:10.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:10.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:40:10.743 00:40:10.743 --- 10.0.0.2 ping statistics --- 00:40:10.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.743 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:10.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:10.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:40:10.743 00:40:10.743 --- 10.0.0.1 ping statistics --- 00:40:10.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.743 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:10.743 13:20:17 
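The `nvmf_tcp_init` sequence above builds the test topology: the target-side port (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace with 10.0.0.2/24, the initiator side (`cvl_0_1`) keeps 10.0.0.1/24 in the root namespace, the NVMe/TCP port 4420 is opened with iptables, and both directions are verified with ping. A dry-run sketch of those steps (interface and namespace names taken from the log; it only echoes the commands unless `DRY_RUN=0` is set and you are root with real interfaces):

```shell
#!/usr/bin/env bash
# Dry-run replay of the nvmf_tcp_init steps from the trace above.
DRY_RUN="${DRY_RUN:-1}"
run() {
    echo "+ $*"
    [ "$DRY_RUN" = 1 ] || "$@"
}

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                     # target port into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port toward the initiator interface:
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                  # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> root ns
```

This is why `nvmf_tgt` is later launched via `ip netns exec cvl_0_0_ns_spdk ...`: the target must listen on the namespaced side of the link.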
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1274241 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1274241 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1274241 ']' 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:10.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:10.743 13:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:10.743 [2024-12-15 13:20:17.824608] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:10.743 [2024-12-15 13:20:17.825520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:10.743 [2024-12-15 13:20:17.825554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:10.743 [2024-12-15 13:20:17.905299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:10.743 [2024-12-15 13:20:17.927433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:10.743 [2024-12-15 13:20:17.927473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:10.743 [2024-12-15 13:20:17.927481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:10.743 [2024-12-15 13:20:17.927490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:10.743 [2024-12-15 13:20:17.927495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:10.743 [2024-12-15 13:20:17.928788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.743 [2024-12-15 13:20:17.928935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.743 [2024-12-15 13:20:17.928936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:10.743 [2024-12-15 13:20:17.928901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:10.743 [2024-12-15 13:20:17.992294] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:10.743 [2024-12-15 13:20:17.993327] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:10.743 [2024-12-15 13:20:17.993436] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:10.743 [2024-12-15 13:20:17.993891] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:10.743 [2024-12-15 13:20:17.993922] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:10.743 [2024-12-15 13:20:18.229756] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:10.743 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:40:11.003 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:11.003 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:11.262 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:11.262 13:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:11.262 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:11.262 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:11.520 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:11.779 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:11.780 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:12.039 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:12.039 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:12.297 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:40:12.297 13:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:12.297 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:12.557 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:12.557 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:12.815 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:12.815 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:13.072 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:13.072 [2024-12-15 13:20:20.905688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:13.072 13:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:13.329 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:13.586 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:13.844 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:13.844 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:13.844 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:13.844 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:13.844 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:13.844 13:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:15.742 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:15.742 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:15.742 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:15.742 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:15.742 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:15.742 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:40:15.742 13:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:15.742 [global] 00:40:15.742 thread=1 00:40:15.742 invalidate=1 00:40:15.742 rw=write 00:40:15.742 time_based=1 00:40:15.742 runtime=1 00:40:15.742 ioengine=libaio 00:40:15.742 direct=1 00:40:15.742 bs=4096 00:40:15.742 iodepth=1 00:40:15.742 norandommap=0 00:40:15.742 numjobs=1 00:40:15.742 00:40:15.742 verify_dump=1 00:40:15.742 verify_backlog=512 00:40:15.742 verify_state_save=0 00:40:15.742 do_verify=1 00:40:15.742 verify=crc32c-intel 00:40:15.742 [job0] 00:40:15.742 filename=/dev/nvme0n1 00:40:15.742 [job1] 00:40:15.742 filename=/dev/nvme0n2 00:40:15.743 [job2] 00:40:15.743 filename=/dev/nvme0n3 00:40:15.743 [job3] 00:40:15.743 filename=/dev/nvme0n4 00:40:16.000 Could not set queue depth (nvme0n1) 00:40:16.000 Could not set queue depth (nvme0n2) 00:40:16.000 Could not set queue depth (nvme0n3) 00:40:16.000 Could not set queue depth (nvme0n4) 00:40:16.257 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.257 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.257 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.257 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.257 fio-3.35 00:40:16.257 Starting 4 threads 00:40:17.630 00:40:17.630 job0: (groupid=0, jobs=1): err= 0: pid=1275346: Sun Dec 15 13:20:25 2024 00:40:17.630 read: IOPS=2096, BW=8388KiB/s (8589kB/s)(8396KiB/1001msec) 00:40:17.630 slat (nsec): min=6736, max=24330, avg=9132.56, stdev=1538.86 00:40:17.630 clat (usec): min=198, max=625, avg=240.08, stdev=18.69 00:40:17.630 lat (usec): min=207, max=633, 
avg=249.21, stdev=18.75 00:40:17.630 clat percentiles (usec): 00:40:17.630 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 229], 00:40:17.630 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 243], 00:40:17.630 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 258], 00:40:17.630 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 562], 99.95th=[ 586], 00:40:17.630 | 99.99th=[ 627] 00:40:17.630 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:17.630 slat (nsec): min=9764, max=43038, avg=13002.31, stdev=2099.66 00:40:17.630 clat (usec): min=134, max=277, avg=167.19, stdev=16.47 00:40:17.630 lat (usec): min=146, max=290, avg=180.19, stdev=16.55 00:40:17.630 clat percentiles (usec): 00:40:17.630 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:40:17.630 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:40:17.630 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 200], 00:40:17.630 | 99.00th=[ 223], 99.50th=[ 265], 99.90th=[ 269], 99.95th=[ 277], 00:40:17.630 | 99.99th=[ 277] 00:40:17.630 bw ( KiB/s): min=10768, max=10768, per=29.89%, avg=10768.00, stdev= 0.00, samples=1 00:40:17.630 iops : min= 2692, max= 2692, avg=2692.00, stdev= 0.00, samples=1 00:40:17.630 lat (usec) : 250=93.35%, 500=6.59%, 750=0.06% 00:40:17.630 cpu : usr=4.70%, sys=7.30%, ctx=4661, majf=0, minf=1 00:40:17.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.630 issued rwts: total=2099,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.630 job1: (groupid=0, jobs=1): err= 0: pid=1275347: Sun Dec 15 13:20:25 2024 00:40:17.630 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:17.630 slat (nsec): min=7153, max=61919, 
avg=8921.73, stdev=1983.14 00:40:17.630 clat (usec): min=215, max=750, avg=247.57, stdev=25.54 00:40:17.630 lat (usec): min=223, max=759, avg=256.49, stdev=26.06 00:40:17.630 clat percentiles (usec): 00:40:17.630 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:40:17.630 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:40:17.631 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:40:17.631 | 99.00th=[ 306], 99.50th=[ 449], 99.90th=[ 570], 99.95th=[ 619], 00:40:17.631 | 99.99th=[ 750] 00:40:17.631 write: IOPS=2337, BW=9351KiB/s (9575kB/s)(9360KiB/1001msec); 0 zone resets 00:40:17.631 slat (nsec): min=10716, max=51090, avg=12814.13, stdev=2342.88 00:40:17.631 clat (usec): min=141, max=318, avg=184.36, stdev=29.57 00:40:17.631 lat (usec): min=152, max=369, avg=197.17, stdev=29.80 00:40:17.631 clat percentiles (usec): 00:40:17.631 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:40:17.631 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:40:17.631 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 215], 95.00th=[ 277], 00:40:17.631 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 302], 99.95th=[ 306], 00:40:17.631 | 99.99th=[ 318] 00:40:17.631 bw ( KiB/s): min= 9704, max= 9704, per=26.94%, avg=9704.00, stdev= 0.00, samples=1 00:40:17.631 iops : min= 2426, max= 2426, avg=2426.00, stdev= 0.00, samples=1 00:40:17.631 lat (usec) : 250=82.89%, 500=17.02%, 750=0.07%, 1000=0.02% 00:40:17.631 cpu : usr=3.30%, sys=7.90%, ctx=4390, majf=0, minf=1 00:40:17.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.631 issued rwts: total=2048,2340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.631 job2: (groupid=0, jobs=1): err= 0: 
pid=1275348: Sun Dec 15 13:20:25 2024 00:40:17.631 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:17.631 slat (nsec): min=6620, max=19958, avg=7621.97, stdev=857.17 00:40:17.631 clat (usec): min=182, max=585, avg=288.39, stdev=76.90 00:40:17.631 lat (usec): min=189, max=592, avg=296.01, stdev=76.91 00:40:17.631 clat percentiles (usec): 00:40:17.631 | 1.00th=[ 200], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 237], 00:40:17.631 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 269], 00:40:17.631 | 70.00th=[ 281], 80.00th=[ 314], 90.00th=[ 445], 95.00th=[ 469], 00:40:17.631 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 562], 99.95th=[ 570], 00:40:17.631 | 99.99th=[ 586] 00:40:17.631 write: IOPS=2072, BW=8292KiB/s (8491kB/s)(8300KiB/1001msec); 0 zone resets 00:40:17.631 slat (nsec): min=4890, max=31447, avg=10827.18, stdev=1324.75 00:40:17.631 clat (usec): min=123, max=2188, avg=174.25, stdev=50.93 00:40:17.631 lat (usec): min=134, max=2199, avg=185.08, stdev=50.92 00:40:17.631 clat percentiles (usec): 00:40:17.631 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:40:17.631 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 180], 00:40:17.631 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 217], 00:40:17.631 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 249], 99.95th=[ 249], 00:40:17.631 | 99.99th=[ 2180] 00:40:17.631 bw ( KiB/s): min= 8880, max= 8880, per=24.65%, avg=8880.00, stdev= 0.00, samples=1 00:40:17.631 iops : min= 2220, max= 2220, avg=2220.00, stdev= 0.00, samples=1 00:40:17.631 lat (usec) : 250=66.94%, 500=32.55%, 750=0.49% 00:40:17.631 lat (msec) : 4=0.02% 00:40:17.631 cpu : usr=1.60%, sys=4.60%, ctx=4125, majf=0, minf=1 00:40:17.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.631 issued 
rwts: total=2048,2075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.631 job3: (groupid=0, jobs=1): err= 0: pid=1275349: Sun Dec 15 13:20:25 2024 00:40:17.631 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:40:17.631 slat (nsec): min=7009, max=22920, avg=9369.54, stdev=1249.63 00:40:17.631 clat (usec): min=213, max=553, avg=340.75, stdev=90.08 00:40:17.631 lat (usec): min=221, max=563, avg=350.12, stdev=90.38 00:40:17.631 clat percentiles (usec): 00:40:17.631 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 247], 20.00th=[ 262], 00:40:17.631 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 318], 60.00th=[ 338], 00:40:17.631 | 70.00th=[ 359], 80.00th=[ 445], 90.00th=[ 502], 95.00th=[ 506], 00:40:17.631 | 99.00th=[ 519], 99.50th=[ 523], 99.90th=[ 529], 99.95th=[ 553], 00:40:17.631 | 99.99th=[ 553] 00:40:17.631 write: IOPS=2037, BW=8152KiB/s (8347kB/s)(8160KiB/1001msec); 0 zone resets 00:40:17.631 slat (nsec): min=9822, max=35772, avg=12004.90, stdev=1888.49 00:40:17.631 clat (usec): min=142, max=365, avg=209.46, stdev=47.34 00:40:17.631 lat (usec): min=153, max=376, avg=221.46, stdev=47.30 00:40:17.631 clat percentiles (usec): 00:40:17.631 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:40:17.631 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 202], 00:40:17.631 | 70.00th=[ 229], 80.00th=[ 249], 90.00th=[ 281], 95.00th=[ 310], 00:40:17.631 | 99.00th=[ 343], 99.50th=[ 347], 99.90th=[ 359], 99.95th=[ 363], 00:40:17.631 | 99.99th=[ 367] 00:40:17.631 bw ( KiB/s): min= 8192, max= 8192, per=22.74%, avg=8192.00, stdev= 0.00, samples=1 00:40:17.631 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:17.631 lat (usec) : 250=51.12%, 500=44.32%, 750=4.56% 00:40:17.631 cpu : usr=2.20%, sys=6.80%, ctx=3576, majf=0, minf=2 00:40:17.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.631 issued rwts: total=1536,2040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.631 00:40:17.631 Run status group 0 (all jobs): 00:40:17.631 READ: bw=30.2MiB/s (31.6MB/s), 6138KiB/s-8388KiB/s (6285kB/s-8589kB/s), io=30.2MiB (31.7MB), run=1001-1001msec 00:40:17.631 WRITE: bw=35.2MiB/s (36.9MB/s), 8152KiB/s-9.99MiB/s (8347kB/s-10.5MB/s), io=35.2MiB (36.9MB), run=1001-1001msec 00:40:17.631 00:40:17.631 Disk stats (read/write): 00:40:17.631 nvme0n1: ios=1974/2048, merge=0/0, ticks=870/310, in_queue=1180, util=85.77% 00:40:17.631 nvme0n2: ios=1746/2048, merge=0/0, ticks=864/351, in_queue=1215, util=89.93% 00:40:17.631 nvme0n3: ios=1561/2020, merge=0/0, ticks=1356/350, in_queue=1706, util=93.55% 00:40:17.631 nvme0n4: ios=1444/1536, merge=0/0, ticks=528/319, in_queue=847, util=95.28% 00:40:17.631 13:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:17.631 [global] 00:40:17.631 thread=1 00:40:17.631 invalidate=1 00:40:17.631 rw=randwrite 00:40:17.631 time_based=1 00:40:17.631 runtime=1 00:40:17.631 ioengine=libaio 00:40:17.631 direct=1 00:40:17.631 bs=4096 00:40:17.631 iodepth=1 00:40:17.631 norandommap=0 00:40:17.631 numjobs=1 00:40:17.631 00:40:17.631 verify_dump=1 00:40:17.631 verify_backlog=512 00:40:17.631 verify_state_save=0 00:40:17.631 do_verify=1 00:40:17.631 verify=crc32c-intel 00:40:17.631 [job0] 00:40:17.631 filename=/dev/nvme0n1 00:40:17.631 [job1] 00:40:17.631 filename=/dev/nvme0n2 00:40:17.631 [job2] 00:40:17.631 filename=/dev/nvme0n3 00:40:17.631 [job3] 00:40:17.631 filename=/dev/nvme0n4 00:40:17.631 Could not set queue depth (nvme0n1) 00:40:17.631 Could not set queue depth (nvme0n2) 00:40:17.631 Could not set queue 
depth (nvme0n3) 00:40:17.631 Could not set queue depth (nvme0n4) 00:40:17.631 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:17.631 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:17.631 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:17.631 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:17.631 fio-3.35 00:40:17.631 Starting 4 threads 00:40:19.006 00:40:19.006 job0: (groupid=0, jobs=1): err= 0: pid=1275714: Sun Dec 15 13:20:26 2024 00:40:19.006 read: IOPS=1506, BW=6027KiB/s (6172kB/s)(6160KiB/1022msec) 00:40:19.006 slat (nsec): min=6869, max=24540, avg=7884.48, stdev=1147.30 00:40:19.006 clat (usec): min=195, max=41081, avg=399.90, stdev=2538.26 00:40:19.006 lat (usec): min=204, max=41090, avg=407.79, stdev=2538.96 00:40:19.006 clat percentiles (usec): 00:40:19.006 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:40:19.006 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 245], 00:40:19.006 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 258], 00:40:19.006 | 99.00th=[ 277], 99.50th=[ 408], 99.90th=[41157], 99.95th=[41157], 00:40:19.006 | 99.99th=[41157] 00:40:19.006 write: IOPS=2003, BW=8016KiB/s (8208kB/s)(8192KiB/1022msec); 0 zone resets 00:40:19.006 slat (nsec): min=9407, max=42191, avg=11280.75, stdev=1613.70 00:40:19.006 clat (usec): min=120, max=498, avg=175.82, stdev=46.73 00:40:19.006 lat (usec): min=130, max=509, avg=187.10, stdev=46.70 00:40:19.006 clat percentiles (usec): 00:40:19.006 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:40:19.006 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 161], 00:40:19.006 | 70.00th=[ 196], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 245], 00:40:19.006 | 99.00th=[ 306], 99.50th=[ 371], 
99.90th=[ 383], 99.95th=[ 388], 00:40:19.006 | 99.99th=[ 498] 00:40:19.006 bw ( KiB/s): min= 8192, max= 8192, per=37.20%, avg=8192.00, stdev= 0.00, samples=2 00:40:19.006 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:40:19.006 lat (usec) : 250=90.97%, 500=8.84% 00:40:19.006 lat (msec) : 4=0.03%, 50=0.17% 00:40:19.006 cpu : usr=2.55%, sys=3.04%, ctx=3592, majf=0, minf=1 00:40:19.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:19.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.006 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:19.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:19.006 job1: (groupid=0, jobs=1): err= 0: pid=1275715: Sun Dec 15 13:20:26 2024 00:40:19.006 read: IOPS=91, BW=368KiB/s (376kB/s)(376KiB/1023msec) 00:40:19.006 slat (nsec): min=6519, max=23875, avg=10830.81, stdev=6211.70 00:40:19.006 clat (usec): min=227, max=41314, avg=9597.78, stdev=17053.47 00:40:19.006 lat (usec): min=234, max=41324, avg=9608.61, stdev=17057.71 00:40:19.006 clat percentiles (usec): 00:40:19.006 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 245], 00:40:19.006 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 260], 00:40:19.006 | 70.00th=[ 400], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:40:19.006 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:19.006 | 99.99th=[41157] 00:40:19.006 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:40:19.006 slat (nsec): min=10383, max=37272, avg=11496.49, stdev=1523.56 00:40:19.006 clat (usec): min=130, max=532, avg=217.66, stdev=41.58 00:40:19.006 lat (usec): min=141, max=543, avg=229.16, stdev=41.74 00:40:19.006 clat percentiles (usec): 00:40:19.006 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 186], 00:40:19.006 | 
30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 227], 00:40:19.006 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:40:19.006 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 529], 99.95th=[ 529], 00:40:19.006 | 99.99th=[ 529] 00:40:19.006 bw ( KiB/s): min= 4096, max= 4096, per=18.60%, avg=4096.00, stdev= 0.00, samples=1 00:40:19.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:19.006 lat (usec) : 250=72.11%, 500=23.93%, 750=0.33% 00:40:19.006 lat (msec) : 50=3.63% 00:40:19.006 cpu : usr=0.39%, sys=0.59%, ctx=606, majf=0, minf=1 00:40:19.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:19.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.006 issued rwts: total=94,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:19.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:19.006 job2: (groupid=0, jobs=1): err= 0: pid=1275716: Sun Dec 15 13:20:26 2024 00:40:19.006 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:40:19.006 slat (nsec): min=9353, max=24842, avg=23306.00, stdev=3180.24 00:40:19.006 clat (usec): min=40869, max=41987, avg=41027.86, stdev=232.29 00:40:19.006 lat (usec): min=40893, max=42011, avg=41051.17, stdev=231.60 00:40:19.006 clat percentiles (usec): 00:40:19.006 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:19.006 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:19.006 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:19.006 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:19.006 | 99.99th=[42206] 00:40:19.006 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:40:19.006 slat (nsec): min=9505, max=38180, avg=10455.91, stdev=1640.63 00:40:19.006 clat (usec): min=157, max=371, avg=192.17, stdev=17.11 
00:40:19.006 lat (usec): min=168, max=381, avg=202.63, stdev=17.47 00:40:19.006 clat percentiles (usec): 00:40:19.006 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 180], 00:40:19.006 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:40:19.006 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 217], 00:40:19.006 | 99.00th=[ 231], 99.50th=[ 251], 99.90th=[ 371], 99.95th=[ 371], 00:40:19.006 | 99.99th=[ 371] 00:40:19.006 bw ( KiB/s): min= 4096, max= 4096, per=18.60%, avg=4096.00, stdev= 0.00, samples=1 00:40:19.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:19.006 lat (usec) : 250=95.32%, 500=0.56% 00:40:19.006 lat (msec) : 50=4.12% 00:40:19.006 cpu : usr=0.30%, sys=0.60%, ctx=535, majf=0, minf=1 00:40:19.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:19.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.007 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:19.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:19.007 job3: (groupid=0, jobs=1): err= 0: pid=1275717: Sun Dec 15 13:20:26 2024 00:40:19.007 read: IOPS=2484, BW=9938KiB/s (10.2MB/s)(9948KiB/1001msec) 00:40:19.007 slat (nsec): min=6226, max=24480, avg=7317.04, stdev=1027.23 00:40:19.007 clat (usec): min=170, max=450, avg=214.70, stdev=19.46 00:40:19.007 lat (usec): min=177, max=457, avg=222.01, stdev=19.45 00:40:19.007 clat percentiles (usec): 00:40:19.007 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 202], 00:40:19.007 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 215], 00:40:19.007 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 247], 95.00th=[ 249], 00:40:19.007 | 99.00th=[ 255], 99.50th=[ 258], 99.90th=[ 289], 99.95th=[ 293], 00:40:19.007 | 99.99th=[ 453] 00:40:19.007 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 
zone resets 00:40:19.007 slat (nsec): min=8810, max=40328, avg=10025.76, stdev=1160.47 00:40:19.007 clat (usec): min=118, max=287, avg=160.35, stdev=24.20 00:40:19.007 lat (usec): min=129, max=297, avg=170.37, stdev=24.29 00:40:19.007 clat percentiles (usec): 00:40:19.007 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 143], 00:40:19.007 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:40:19.007 | 70.00th=[ 165], 80.00th=[ 178], 90.00th=[ 194], 95.00th=[ 210], 00:40:19.007 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 273], 99.95th=[ 281], 00:40:19.007 | 99.99th=[ 289] 00:40:19.007 bw ( KiB/s): min=12288, max=12288, per=55.80%, avg=12288.00, stdev= 0.00, samples=1 00:40:19.007 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:40:19.007 lat (usec) : 250=97.68%, 500=2.32% 00:40:19.007 cpu : usr=2.90%, sys=4.30%, ctx=5047, majf=0, minf=2 00:40:19.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:19.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:19.007 issued rwts: total=2487,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:19.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:19.007 00:40:19.007 Run status group 0 (all jobs): 00:40:19.007 READ: bw=15.8MiB/s (16.6MB/s), 87.3KiB/s-9938KiB/s (89.4kB/s-10.2MB/s), io=16.2MiB (17.0MB), run=1001-1023msec 00:40:19.007 WRITE: bw=21.5MiB/s (22.6MB/s), 2002KiB/s-9.99MiB/s (2050kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1023msec 00:40:19.007 00:40:19.007 Disk stats (read/write): 00:40:19.007 nvme0n1: ios=1572/2023, merge=0/0, ticks=1399/353, in_queue=1752, util=99.70% 00:40:19.007 nvme0n2: ios=82/512, merge=0/0, ticks=823/109, in_queue=932, util=91.05% 00:40:19.007 nvme0n3: ios=76/512, merge=0/0, ticks=1209/99, in_queue=1308, util=98.33% 00:40:19.007 nvme0n4: ios=2048/2356, merge=0/0, ticks=412/368, in_queue=780, 
util=89.72% 00:40:19.007 13:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:19.007 [global] 00:40:19.007 thread=1 00:40:19.007 invalidate=1 00:40:19.007 rw=write 00:40:19.007 time_based=1 00:40:19.007 runtime=1 00:40:19.007 ioengine=libaio 00:40:19.007 direct=1 00:40:19.007 bs=4096 00:40:19.007 iodepth=128 00:40:19.007 norandommap=0 00:40:19.007 numjobs=1 00:40:19.007 00:40:19.007 verify_dump=1 00:40:19.007 verify_backlog=512 00:40:19.007 verify_state_save=0 00:40:19.007 do_verify=1 00:40:19.007 verify=crc32c-intel 00:40:19.007 [job0] 00:40:19.007 filename=/dev/nvme0n1 00:40:19.007 [job1] 00:40:19.007 filename=/dev/nvme0n2 00:40:19.007 [job2] 00:40:19.007 filename=/dev/nvme0n3 00:40:19.007 [job3] 00:40:19.007 filename=/dev/nvme0n4 00:40:19.007 Could not set queue depth (nvme0n1) 00:40:19.007 Could not set queue depth (nvme0n2) 00:40:19.007 Could not set queue depth (nvme0n3) 00:40:19.007 Could not set queue depth (nvme0n4) 00:40:19.265 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:19.265 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:19.265 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:19.265 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:19.265 fio-3.35 00:40:19.265 Starting 4 threads 00:40:20.662 00:40:20.662 job0: (groupid=0, jobs=1): err= 0: pid=1276078: Sun Dec 15 13:20:28 2024 00:40:20.662 read: IOPS=4927, BW=19.2MiB/s (20.2MB/s)(19.4MiB/1007msec) 00:40:20.662 slat (nsec): min=1681, max=12555k, avg=81236.85, stdev=707623.33 00:40:20.662 clat (usec): min=685, max=43265, avg=11148.38, stdev=5429.08 00:40:20.662 lat (usec): min=923, max=43270, 
avg=11229.61, stdev=5489.95 00:40:20.662 clat percentiles (usec): 00:40:20.662 | 1.00th=[ 2442], 5.00th=[ 6783], 10.00th=[ 7832], 20.00th=[ 8979], 00:40:20.662 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10290], 00:40:20.662 | 70.00th=[11207], 80.00th=[11994], 90.00th=[16909], 95.00th=[19268], 00:40:20.662 | 99.00th=[40633], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:40:20.662 | 99.99th=[43254] 00:40:20.662 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:40:20.662 slat (usec): min=2, max=40949, avg=99.75, stdev=1118.47 00:40:20.662 clat (msec): min=2, max=121, avg=11.31, stdev= 7.04 00:40:20.662 lat (msec): min=2, max=121, avg=11.41, stdev= 7.24 00:40:20.662 clat percentiles (msec): 00:40:20.662 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:40:20.662 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:40:20.662 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 18], 95.00th=[ 25], 00:40:20.662 | 99.00th=[ 40], 99.50th=[ 42], 99.90th=[ 84], 99.95th=[ 122], 00:40:20.662 | 99.99th=[ 122] 00:40:20.662 bw ( KiB/s): min=16384, max=24576, per=27.97%, avg=20480.00, stdev=5792.62, samples=2 00:40:20.662 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:40:20.662 lat (usec) : 750=0.01%, 1000=0.10% 00:40:20.662 lat (msec) : 2=0.28%, 4=2.38%, 10=61.24%, 20=30.56%, 50=5.36% 00:40:20.662 lat (msec) : 100=0.04%, 250=0.04% 00:40:20.662 cpu : usr=5.17%, sys=5.86%, ctx=305, majf=0, minf=1 00:40:20.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:20.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.662 issued rwts: total=4962,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.662 job1: (groupid=0, jobs=1): err= 0: pid=1276079: Sun Dec 15 13:20:28 2024 00:40:20.662 
read: IOPS=4283, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1002msec) 00:40:20.662 slat (nsec): min=1485, max=14266k, avg=112215.22, stdev=764540.97 00:40:20.662 clat (usec): min=828, max=47578, avg=14014.77, stdev=8007.82 00:40:20.662 lat (usec): min=3109, max=47586, avg=14126.99, stdev=8057.02 00:40:20.662 clat percentiles (usec): 00:40:20.662 | 1.00th=[ 5473], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:40:20.662 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11994], 00:40:20.662 | 70.00th=[12518], 80.00th=[13435], 90.00th=[27919], 95.00th=[35390], 00:40:20.662 | 99.00th=[42730], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:40:20.662 | 99.99th=[47449] 00:40:20.662 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:40:20.662 slat (usec): min=2, max=25303, avg=106.91, stdev=829.78 00:40:20.662 clat (usec): min=5197, max=65428, avg=14482.93, stdev=8811.07 00:40:20.662 lat (usec): min=5207, max=65460, avg=14589.83, stdev=8892.18 00:40:20.662 clat percentiles (usec): 00:40:20.662 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[ 9896], 00:40:20.662 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[11863], 00:40:20.662 | 70.00th=[12387], 80.00th=[17433], 90.00th=[28967], 95.00th=[33424], 00:40:20.662 | 99.00th=[49546], 99.50th=[53740], 99.90th=[54264], 99.95th=[55837], 00:40:20.662 | 99.99th=[65274] 00:40:20.662 bw ( KiB/s): min=16384, max=20521, per=25.20%, avg=18452.50, stdev=2925.30, samples=2 00:40:20.662 iops : min= 4096, max= 5130, avg=4613.00, stdev=731.15, samples=2 00:40:20.662 lat (usec) : 1000=0.01% 00:40:20.662 lat (msec) : 4=0.36%, 10=22.45%, 20=61.84%, 50=14.98%, 100=0.36% 00:40:20.662 cpu : usr=2.90%, sys=5.89%, ctx=348, majf=0, minf=2 00:40:20.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:40:20.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:40:20.662 issued rwts: total=4292,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.662 job2: (groupid=0, jobs=1): err= 0: pid=1276080: Sun Dec 15 13:20:28 2024 00:40:20.662 read: IOPS=3008, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1006msec) 00:40:20.662 slat (nsec): min=1504, max=16500k, avg=177210.09, stdev=1104812.32 00:40:20.662 clat (usec): min=892, max=114863, avg=18608.04, stdev=14707.81 00:40:20.662 lat (msec): min=4, max=114, avg=18.79, stdev=14.85 00:40:20.662 clat percentiles (msec): 00:40:20.662 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:40:20.662 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:40:20.662 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 36], 95.00th=[ 49], 00:40:20.662 | 99.00th=[ 94], 99.50th=[ 105], 99.90th=[ 115], 99.95th=[ 115], 00:40:20.662 | 99.99th=[ 115] 00:40:20.662 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:40:20.662 slat (usec): min=2, max=11600, avg=144.58, stdev=712.06 00:40:20.662 clat (usec): min=1601, max=122414, avg=23061.52, stdev=20375.60 00:40:20.662 lat (usec): min=1614, max=122424, avg=23206.10, stdev=20481.16 00:40:20.662 clat percentiles (msec): 00:40:20.662 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 12], 00:40:20.662 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 17], 60.00th=[ 20], 00:40:20.662 | 70.00th=[ 23], 80.00th=[ 33], 90.00th=[ 46], 95.00th=[ 57], 00:40:20.662 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 123], 99.95th=[ 123], 00:40:20.662 | 99.99th=[ 123] 00:40:20.662 bw ( KiB/s): min=12056, max=12520, per=16.78%, avg=12288.00, stdev=328.10, samples=2 00:40:20.662 iops : min= 3014, max= 3130, avg=3072.00, stdev=82.02, samples=2 00:40:20.662 lat (usec) : 1000=0.02% 00:40:20.662 lat (msec) : 2=0.21%, 4=0.82%, 10=11.94%, 20=58.67%, 50=23.09% 00:40:20.662 lat (msec) : 100=3.61%, 250=1.66% 00:40:20.662 cpu : usr=2.79%, sys=4.18%, ctx=381, majf=0, minf=1 00:40:20.662 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:40:20.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.662 issued rwts: total=3027,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.662 job3: (groupid=0, jobs=1): err= 0: pid=1276081: Sun Dec 15 13:20:28 2024 00:40:20.662 read: IOPS=5431, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1003msec) 00:40:20.662 slat (nsec): min=1450, max=10994k, avg=90106.06, stdev=650782.27 00:40:20.662 clat (usec): min=496, max=21779, avg=11561.02, stdev=2619.49 00:40:20.662 lat (usec): min=3881, max=21785, avg=11651.12, stdev=2659.71 00:40:20.662 clat percentiles (usec): 00:40:20.662 | 1.00th=[ 5932], 5.00th=[ 8094], 10.00th=[ 9241], 20.00th=[ 9765], 00:40:20.662 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:40:20.662 | 70.00th=[12125], 80.00th=[13173], 90.00th=[15008], 95.00th=[17171], 00:40:20.662 | 99.00th=[19792], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:40:20.662 | 99.99th=[21890] 00:40:20.662 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:40:20.662 slat (usec): min=2, max=9038, avg=82.92, stdev=496.23 00:40:20.662 clat (usec): min=1595, max=25063, avg=11318.25, stdev=3051.87 00:40:20.662 lat (usec): min=1609, max=25072, avg=11401.17, stdev=3070.79 00:40:20.662 clat percentiles (usec): 00:40:20.662 | 1.00th=[ 6325], 5.00th=[ 7046], 10.00th=[ 7635], 20.00th=[ 9896], 00:40:20.662 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:40:20.662 | 70.00th=[11600], 80.00th=[12125], 90.00th=[14484], 95.00th=[15926], 00:40:20.662 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:40:20.662 | 99.99th=[25035] 00:40:20.662 bw ( KiB/s): min=21432, max=23624, per=30.77%, avg=22528.00, stdev=1549.98, samples=2 00:40:20.662 iops : 
min= 5358, max= 5906, avg=5632.00, stdev=387.49, samples=2 00:40:20.662 lat (usec) : 500=0.01% 00:40:20.662 lat (msec) : 2=0.05%, 4=0.19%, 10=23.15%, 20=74.71%, 50=1.89% 00:40:20.662 cpu : usr=4.39%, sys=5.89%, ctx=560, majf=0, minf=1 00:40:20.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:40:20.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.662 issued rwts: total=5448,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.662 00:40:20.662 Run status group 0 (all jobs): 00:40:20.662 READ: bw=68.8MiB/s (72.1MB/s), 11.8MiB/s-21.2MiB/s (12.3MB/s-22.2MB/s), io=69.3MiB (72.6MB), run=1002-1007msec 00:40:20.662 WRITE: bw=71.5MiB/s (75.0MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=72.0MiB (75.5MB), run=1002-1007msec 00:40:20.662 00:40:20.662 Disk stats (read/write): 00:40:20.662 nvme0n1: ios=4190/4560, merge=0/0, ticks=41858/46981, in_queue=88839, util=100.00% 00:40:20.662 nvme0n2: ios=3584/3611, merge=0/0, ticks=20231/19369, in_queue=39600, util=86.40% 00:40:20.662 nvme0n3: ios=2618/2663, merge=0/0, ticks=46435/56823, in_queue=103258, util=98.13% 00:40:20.662 nvme0n4: ios=4629/4874, merge=0/0, ticks=40217/42411, in_queue=82628, util=98.01% 00:40:20.662 13:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:20.662 [global] 00:40:20.662 thread=1 00:40:20.662 invalidate=1 00:40:20.662 rw=randwrite 00:40:20.662 time_based=1 00:40:20.662 runtime=1 00:40:20.662 ioengine=libaio 00:40:20.663 direct=1 00:40:20.663 bs=4096 00:40:20.663 iodepth=128 00:40:20.663 norandommap=0 00:40:20.663 numjobs=1 00:40:20.663 00:40:20.663 verify_dump=1 00:40:20.663 verify_backlog=512 00:40:20.663 verify_state_save=0 
00:40:20.663 do_verify=1 00:40:20.663 verify=crc32c-intel 00:40:20.663 [job0] 00:40:20.663 filename=/dev/nvme0n1 00:40:20.663 [job1] 00:40:20.663 filename=/dev/nvme0n2 00:40:20.663 [job2] 00:40:20.663 filename=/dev/nvme0n3 00:40:20.663 [job3] 00:40:20.663 filename=/dev/nvme0n4 00:40:20.663 Could not set queue depth (nvme0n1) 00:40:20.663 Could not set queue depth (nvme0n2) 00:40:20.663 Could not set queue depth (nvme0n3) 00:40:20.663 Could not set queue depth (nvme0n4) 00:40:20.921 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:20.921 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:20.921 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:20.921 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:20.921 fio-3.35 00:40:20.921 Starting 4 threads 00:40:22.292 00:40:22.292 job0: (groupid=0, jobs=1): err= 0: pid=1276445: Sun Dec 15 13:20:29 2024 00:40:22.292 read: IOPS=1963, BW=7854KiB/s (8042kB/s)(7940KiB/1011msec) 00:40:22.292 slat (usec): min=2, max=27185, avg=292.28, stdev=1826.24 00:40:22.292 clat (usec): min=973, max=111365, avg=37089.28, stdev=29701.18 00:40:22.292 lat (msec): min=11, max=111, avg=37.38, stdev=29.87 00:40:22.292 clat percentiles (msec): 00:40:22.292 | 1.00th=[ 12], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 16], 00:40:22.292 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 20], 60.00th=[ 30], 00:40:22.292 | 70.00th=[ 46], 80.00th=[ 74], 90.00th=[ 90], 95.00th=[ 99], 00:40:22.292 | 99.00th=[ 112], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 112], 00:40:22.292 | 99.99th=[ 112] 00:40:22.292 write: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec); 0 zone resets 00:40:22.292 slat (usec): min=5, max=21538, avg=201.15, stdev=1370.70 00:40:22.292 clat (usec): min=10827, max=83183, avg=25637.16, 
stdev=17977.60 00:40:22.292 lat (usec): min=10843, max=83194, avg=25838.30, stdev=18065.84 00:40:22.292 clat percentiles (usec): 00:40:22.292 | 1.00th=[11469], 5.00th=[14222], 10.00th=[14353], 20.00th=[14615], 00:40:22.292 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15401], 00:40:22.292 | 70.00th=[23725], 80.00th=[41157], 90.00th=[58459], 95.00th=[64750], 00:40:22.292 | 99.00th=[82314], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:40:22.292 | 99.99th=[83362] 00:40:22.292 bw ( KiB/s): min= 8192, max= 8192, per=11.26%, avg=8192.00, stdev= 0.00, samples=2 00:40:22.292 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:40:22.292 lat (usec) : 1000=0.02% 00:40:22.292 lat (msec) : 20=61.02%, 50=17.73%, 100=19.14%, 250=2.08% 00:40:22.292 cpu : usr=1.88%, sys=3.27%, ctx=144, majf=0, minf=1 00:40:22.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:40:22.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:22.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:22.292 issued rwts: total=1985,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:22.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:22.292 job1: (groupid=0, jobs=1): err= 0: pid=1276446: Sun Dec 15 13:20:29 2024 00:40:22.292 read: IOPS=4502, BW=17.6MiB/s (18.4MB/s)(17.8MiB/1011msec) 00:40:22.292 slat (nsec): min=1340, max=12057k, avg=99594.32, stdev=766457.35 00:40:22.292 clat (usec): min=4060, max=40138, avg=12841.44, stdev=4547.87 00:40:22.292 lat (usec): min=5496, max=40140, avg=12941.03, stdev=4614.89 00:40:22.292 clat percentiles (usec): 00:40:22.292 | 1.00th=[ 7373], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[10028], 00:40:22.292 | 30.00th=[10683], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:40:22.292 | 70.00th=[12780], 80.00th=[14353], 90.00th=[17957], 95.00th=[21890], 00:40:22.292 | 99.00th=[33424], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 
00:40:22.292 | 99.99th=[40109] 00:40:22.292 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:40:22.292 slat (usec): min=2, max=10144, avg=113.36, stdev=693.83 00:40:22.292 clat (usec): min=1484, max=40138, avg=15133.42, stdev=7985.81 00:40:22.292 lat (usec): min=1499, max=40142, avg=15246.78, stdev=8049.07 00:40:22.292 clat percentiles (usec): 00:40:22.292 | 1.00th=[ 5997], 5.00th=[ 7635], 10.00th=[ 8291], 20.00th=[ 9241], 00:40:22.292 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10945], 60.00th=[11863], 00:40:22.293 | 70.00th=[16188], 80.00th=[26608], 90.00th=[29230], 95.00th=[29754], 00:40:22.293 | 99.00th=[30540], 99.50th=[30540], 99.90th=[37487], 99.95th=[37487], 00:40:22.293 | 99.99th=[40109] 00:40:22.293 bw ( KiB/s): min=16384, max=20480, per=25.33%, avg=18432.00, stdev=2896.31, samples=2 00:40:22.293 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:40:22.293 lat (msec) : 2=0.02%, 4=0.07%, 10=26.98%, 20=57.09%, 50=15.85% 00:40:22.293 cpu : usr=3.56%, sys=5.94%, ctx=327, majf=0, minf=1 00:40:22.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:22.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:22.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:22.293 issued rwts: total=4552,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:22.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:22.293 job2: (groupid=0, jobs=1): err= 0: pid=1276449: Sun Dec 15 13:20:29 2024 00:40:22.293 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:40:22.293 slat (nsec): min=1390, max=10268k, avg=87707.15, stdev=740366.41 00:40:22.293 clat (usec): min=3731, max=21886, avg=11635.41, stdev=2871.16 00:40:22.293 lat (usec): min=3737, max=21899, avg=11723.12, stdev=2929.34 00:40:22.293 clat percentiles (usec): 00:40:22.293 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9896], 00:40:22.293 | 30.00th=[10159], 
40.00th=[10421], 50.00th=[10552], 60.00th=[11076], 00:40:22.293 | 70.00th=[11469], 80.00th=[14353], 90.00th=[16188], 95.00th=[17957], 00:40:22.293 | 99.00th=[19792], 99.50th=[20317], 99.90th=[21365], 99.95th=[21627], 00:40:22.293 | 99.99th=[21890] 00:40:22.293 write: IOPS=5840, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1007msec); 0 zone resets 00:40:22.293 slat (usec): min=2, max=9822, avg=80.56, stdev=659.67 00:40:22.293 clat (usec): min=1069, max=21111, avg=10585.02, stdev=2649.23 00:40:22.293 lat (usec): min=1092, max=21116, avg=10665.58, stdev=2683.07 00:40:22.293 clat percentiles (usec): 00:40:22.293 | 1.00th=[ 5014], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 7832], 00:40:22.293 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:40:22.293 | 70.00th=[11469], 80.00th=[11863], 90.00th=[14746], 95.00th=[15664], 00:40:22.293 | 99.00th=[16319], 99.50th=[17957], 99.90th=[21103], 99.95th=[21103], 00:40:22.293 | 99.99th=[21103] 00:40:22.293 bw ( KiB/s): min=21448, max=24576, per=31.63%, avg=23012.00, stdev=2211.83, samples=2 00:40:22.293 iops : min= 5362, max= 6144, avg=5753.00, stdev=552.96, samples=2 00:40:22.293 lat (msec) : 2=0.09%, 4=0.29%, 10=28.54%, 20=70.77%, 50=0.31% 00:40:22.293 cpu : usr=4.67%, sys=7.06%, ctx=316, majf=0, minf=2 00:40:22.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:22.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:22.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:22.293 issued rwts: total=5632,5881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:22.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:22.293 job3: (groupid=0, jobs=1): err= 0: pid=1276450: Sun Dec 15 13:20:29 2024 00:40:22.293 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:40:22.293 slat (nsec): min=1388, max=10273k, avg=88220.17, stdev=762338.46 00:40:22.293 clat (usec): min=4150, max=21709, avg=11606.24, stdev=2760.25 00:40:22.293 
lat (usec): min=4155, max=26335, avg=11694.46, stdev=2842.16 00:40:22.293 clat percentiles (usec): 00:40:22.293 | 1.00th=[ 7504], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10028], 00:40:22.293 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:40:22.293 | 70.00th=[11207], 80.00th=[12518], 90.00th=[16581], 95.00th=[18482], 00:40:22.293 | 99.00th=[20055], 99.50th=[20317], 99.90th=[21103], 99.95th=[21103], 00:40:22.293 | 99.99th=[21627] 00:40:22.293 write: IOPS=5790, BW=22.6MiB/s (23.7MB/s)(22.9MiB/1011msec); 0 zone resets 00:40:22.293 slat (usec): min=2, max=9477, avg=79.46, stdev=640.61 00:40:22.293 clat (usec): min=2695, max=20786, avg=10696.90, stdev=2356.98 00:40:22.293 lat (usec): min=2702, max=20799, avg=10776.36, stdev=2412.13 00:40:22.293 clat percentiles (usec): 00:40:22.293 | 1.00th=[ 4817], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[ 9241], 00:40:22.293 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:40:22.293 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12518], 95.00th=[15401], 00:40:22.293 | 99.00th=[17957], 99.50th=[19792], 99.90th=[20317], 99.95th=[20579], 00:40:22.293 | 99.99th=[20841] 00:40:22.293 bw ( KiB/s): min=21248, max=24568, per=31.48%, avg=22908.00, stdev=2347.59, samples=2 00:40:22.293 iops : min= 5312, max= 6142, avg=5727.00, stdev=586.90, samples=2 00:40:22.293 lat (msec) : 4=0.34%, 10=22.79%, 20=76.27%, 50=0.60% 00:40:22.293 cpu : usr=5.05%, sys=6.93%, ctx=320, majf=0, minf=1 00:40:22.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:22.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:22.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:22.293 issued rwts: total=5632,5854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:22.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:22.293 00:40:22.293 Run status group 0 (all jobs): 00:40:22.293 READ: bw=68.8MiB/s (72.1MB/s), 7854KiB/s-21.8MiB/s 
(8042kB/s-22.9MB/s), io=69.5MiB (72.9MB), run=1007-1011msec 00:40:22.293 WRITE: bw=71.1MiB/s (74.5MB/s), 8103KiB/s-22.8MiB/s (8297kB/s-23.9MB/s), io=71.8MiB (75.3MB), run=1007-1011msec 00:40:22.293 00:40:22.293 Disk stats (read/write): 00:40:22.293 nvme0n1: ios=1490/1536, merge=0/0, ticks=15602/11022, in_queue=26624, util=86.67% 00:40:22.293 nvme0n2: ios=4028/4096, merge=0/0, ticks=49354/55239, in_queue=104593, util=86.78% 00:40:22.293 nvme0n3: ios=4608/5103, merge=0/0, ticks=51670/52142, in_queue=103812, util=88.95% 00:40:22.293 nvme0n4: ios=4625/5014, merge=0/0, ticks=52601/51770, in_queue=104371, util=97.79% 00:40:22.293 13:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:22.293 13:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1276674 00:40:22.293 13:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:22.293 13:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:22.293 [global] 00:40:22.293 thread=1 00:40:22.293 invalidate=1 00:40:22.293 rw=read 00:40:22.293 time_based=1 00:40:22.293 runtime=10 00:40:22.293 ioengine=libaio 00:40:22.293 direct=1 00:40:22.293 bs=4096 00:40:22.293 iodepth=1 00:40:22.293 norandommap=1 00:40:22.293 numjobs=1 00:40:22.293 00:40:22.293 [job0] 00:40:22.293 filename=/dev/nvme0n1 00:40:22.293 [job1] 00:40:22.293 filename=/dev/nvme0n2 00:40:22.293 [job2] 00:40:22.293 filename=/dev/nvme0n3 00:40:22.293 [job3] 00:40:22.293 filename=/dev/nvme0n4 00:40:22.293 Could not set queue depth (nvme0n1) 00:40:22.293 Could not set queue depth (nvme0n2) 00:40:22.293 Could not set queue depth (nvme0n3) 00:40:22.293 Could not set queue depth (nvme0n4) 00:40:22.293 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:40:22.293 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:22.293 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:22.293 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:22.293 fio-3.35 00:40:22.293 Starting 4 threads 00:40:25.568 13:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:25.568 13:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:25.568 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39411712, buflen=4096 00:40:25.568 fio: pid=1276813, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:25.568 13:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:25.568 13:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:25.568 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4489216, buflen=4096 00:40:25.568 fio: pid=1276812, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:25.568 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=52056064, buflen=4096 00:40:25.568 fio: pid=1276810, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:25.568 13:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:25.568 13:20:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:25.825 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54087680, buflen=4096 00:40:25.826 fio: pid=1276811, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:25.826 13:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:25.826 13:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:25.826 00:40:25.826 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1276810: Sun Dec 15 13:20:33 2024 00:40:25.826 read: IOPS=4090, BW=16.0MiB/s (16.8MB/s)(49.6MiB/3107msec) 00:40:25.826 slat (usec): min=5, max=14003, avg=10.80, stdev=200.39 00:40:25.826 clat (usec): min=169, max=2785, avg=231.75, stdev=38.07 00:40:25.826 lat (usec): min=176, max=14376, avg=242.55, stdev=206.90 00:40:25.826 clat percentiles (usec): 00:40:25.826 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:40:25.826 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 237], 00:40:25.826 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 293], 00:40:25.826 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 478], 99.95th=[ 486], 00:40:25.826 | 99.99th=[ 506] 00:40:25.826 bw ( KiB/s): min=15880, max=17420, per=37.50%, avg=16392.67, stdev=638.78, samples=6 00:40:25.826 iops : min= 3970, max= 4355, avg=4098.17, stdev=159.70, samples=6 00:40:25.826 lat (usec) : 250=79.27%, 500=20.70%, 750=0.02% 00:40:25.826 lat (msec) : 4=0.01% 00:40:25.826 cpu : usr=0.90%, sys=3.93%, ctx=12715, majf=0, minf=1 00:40:25.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:40:25.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.826 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.826 issued rwts: total=12710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:25.826 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1276811: Sun Dec 15 13:20:33 2024 00:40:25.826 read: IOPS=3939, BW=15.4MiB/s (16.1MB/s)(51.6MiB/3352msec) 00:40:25.826 slat (usec): min=6, max=24036, avg=14.88, stdev=315.71 00:40:25.826 clat (usec): min=159, max=1662, avg=234.79, stdev=35.98 00:40:25.826 lat (usec): min=181, max=24319, avg=249.67, stdev=318.77 00:40:25.826 clat percentiles (usec): 00:40:25.826 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 217], 00:40:25.826 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:40:25.826 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 302], 00:40:25.826 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 461], 99.95th=[ 486], 00:40:25.826 | 99.99th=[ 1582] 00:40:25.826 bw ( KiB/s): min=15200, max=16496, per=36.29%, avg=15862.33, stdev=457.10, samples=6 00:40:25.826 iops : min= 3800, max= 4124, avg=3965.50, stdev=114.35, samples=6 00:40:25.826 lat (usec) : 250=83.43%, 500=16.52%, 750=0.02% 00:40:25.826 lat (msec) : 2=0.02% 00:40:25.826 cpu : usr=2.33%, sys=6.06%, ctx=13212, majf=0, minf=2 00:40:25.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.826 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.826 issued rwts: total=13206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:25.826 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not 
supported): pid=1276812: Sun Dec 15 13:20:33 2024 00:40:25.826 read: IOPS=375, BW=1501KiB/s (1537kB/s)(4384KiB/2921msec) 00:40:25.826 slat (nsec): min=7015, max=34938, avg=8957.81, stdev=2597.35 00:40:25.826 clat (usec): min=208, max=41170, avg=2634.80, stdev=9553.03 00:40:25.826 lat (usec): min=216, max=41185, avg=2643.75, stdev=9554.72 00:40:25.826 clat percentiles (usec): 00:40:25.826 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:40:25.826 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 255], 00:40:25.826 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[41157], 00:40:25.826 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:25.826 | 99.99th=[41157] 00:40:25.826 bw ( KiB/s): min= 96, max= 8288, per=3.97%, avg=1737.60, stdev=3661.79, samples=5 00:40:25.826 iops : min= 24, max= 2072, avg=434.40, stdev=915.45, samples=5 00:40:25.826 lat (usec) : 250=54.79%, 500=39.29% 00:40:25.826 lat (msec) : 50=5.83% 00:40:25.826 cpu : usr=0.48%, sys=0.34%, ctx=1097, majf=0, minf=2 00:40:25.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.826 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.826 issued rwts: total=1097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:25.826 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1276813: Sun Dec 15 13:20:33 2024 00:40:25.826 read: IOPS=3554, BW=13.9MiB/s (14.6MB/s)(37.6MiB/2707msec) 00:40:25.826 slat (nsec): min=6857, max=50610, avg=8199.25, stdev=1426.43 00:40:25.826 clat (usec): min=224, max=3634, avg=269.05, stdev=47.55 00:40:25.826 lat (usec): min=232, max=3641, avg=277.25, stdev=47.61 00:40:25.826 clat percentiles (usec): 00:40:25.826 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 
253], 00:40:25.826 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:40:25.826 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:40:25.826 | 99.00th=[ 318], 99.50th=[ 474], 99.90th=[ 506], 99.95th=[ 1029], 00:40:25.826 | 99.99th=[ 3621] 00:40:25.826 bw ( KiB/s): min=14000, max=14592, per=32.69%, avg=14292.80, stdev=237.22, samples=5 00:40:25.826 iops : min= 3500, max= 3648, avg=3573.20, stdev=59.31, samples=5 00:40:25.826 lat (usec) : 250=15.37%, 500=84.50%, 750=0.05%, 1000=0.02% 00:40:25.826 lat (msec) : 2=0.04%, 4=0.01% 00:40:25.826 cpu : usr=2.29%, sys=5.43%, ctx=9624, majf=0, minf=2 00:40:25.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.826 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.826 issued rwts: total=9623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:25.826 00:40:25.826 Run status group 0 (all jobs): 00:40:25.826 READ: bw=42.7MiB/s (44.8MB/s), 1501KiB/s-16.0MiB/s (1537kB/s-16.8MB/s), io=143MiB (150MB), run=2707-3352msec 00:40:25.826 00:40:25.826 Disk stats (read/write): 00:40:25.826 nvme0n1: ios=12692/0, merge=0/0, ticks=3909/0, in_queue=3909, util=97.94% 00:40:25.826 nvme0n2: ios=13206/0, merge=0/0, ticks=2952/0, in_queue=2952, util=93.62% 00:40:25.826 nvme0n3: ios=1094/0, merge=0/0, ticks=2800/0, in_queue=2800, util=96.33% 00:40:25.826 nvme0n4: ios=9250/0, merge=0/0, ticks=2376/0, in_queue=2376, util=96.39% 00:40:26.083 13:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:26.083 13:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:26.340 
13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:26.340 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:26.597 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:26.597 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:26.854 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:26.854 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:26.854 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:26.854 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1276674 00:40:26.854 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:26.854 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:27.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:27.112 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:27.112 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:27.112 13:20:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:27.112 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:27.112 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:27.112 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:27.112 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:27.112 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:27.112 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:27.112 nvmf hotplug test: fio failed as expected 00:40:27.112 13:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:27.377 rmmod nvme_tcp 00:40:27.377 rmmod nvme_fabrics 00:40:27.377 rmmod nvme_keyring 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1274241 ']' 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1274241 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1274241 ']' 00:40:27.377 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1274241 00:40:27.378 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:27.378 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.378 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1274241 
00:40:27.378 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:27.378 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:27.378 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1274241' 00:40:27.378 killing process with pid 1274241 00:40:27.378 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1274241 00:40:27.378 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1274241 00:40:27.638 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:27.638 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:27.638 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:27.638 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:27.638 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:27.638 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:27.638 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:27.638 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:27.638 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:27.639 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.639 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:27.639 13:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.543 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:29.801 00:40:29.801 real 0m25.835s 00:40:29.801 user 1m31.325s 00:40:29.801 sys 0m11.655s 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:29.801 ************************************ 00:40:29.801 END TEST nvmf_fio_target 00:40:29.801 ************************************ 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:29.801 ************************************ 00:40:29.801 START TEST nvmf_bdevio 00:40:29.801 ************************************ 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:29.801 * Looking for test storage... 
00:40:29.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:29.801 13:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:29.801 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- 
# return 0 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:30.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.061 --rc genhtml_branch_coverage=1 00:40:30.061 --rc genhtml_function_coverage=1 00:40:30.061 --rc genhtml_legend=1 00:40:30.061 --rc geninfo_all_blocks=1 00:40:30.061 --rc geninfo_unexecuted_blocks=1 00:40:30.061 00:40:30.061 ' 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:30.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.061 --rc genhtml_branch_coverage=1 00:40:30.061 --rc genhtml_function_coverage=1 00:40:30.061 --rc genhtml_legend=1 00:40:30.061 --rc geninfo_all_blocks=1 00:40:30.061 --rc geninfo_unexecuted_blocks=1 00:40:30.061 00:40:30.061 ' 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:30.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.061 --rc genhtml_branch_coverage=1 00:40:30.061 --rc genhtml_function_coverage=1 00:40:30.061 --rc genhtml_legend=1 00:40:30.061 --rc geninfo_all_blocks=1 00:40:30.061 --rc geninfo_unexecuted_blocks=1 00:40:30.061 00:40:30.061 ' 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:30.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.061 --rc genhtml_branch_coverage=1 00:40:30.061 --rc genhtml_function_coverage=1 00:40:30.061 --rc genhtml_legend=1 00:40:30.061 --rc geninfo_all_blocks=1 00:40:30.061 --rc geninfo_unexecuted_blocks=1 00:40:30.061 00:40:30.061 ' 00:40:30.061 13:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.061 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:30.062 13:20:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:30.062 13:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:36.632 13:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:36.632 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:36.632 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:36.632 13:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:36.632 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:36.633 Found net devices under 0000:af:00.0: cvl_0_0 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:36.633 Found net devices under 0000:af:00.1: cvl_0_1 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:36.633 13:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:36.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:36.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:40:36.633 00:40:36.633 --- 10.0.0.2 ping statistics --- 00:40:36.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:36.633 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:36.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:36.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:40:36.633 00:40:36.633 --- 10.0.0.1 ping statistics --- 00:40:36.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:36.633 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1281004 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1281004 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1281004 ']' 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:36.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:36.633 [2024-12-15 13:20:43.740152] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:36.633 [2024-12-15 13:20:43.741093] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:36.633 [2024-12-15 13:20:43.741145] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:36.633 [2024-12-15 13:20:43.819598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:36.633 [2024-12-15 13:20:43.841768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:36.633 [2024-12-15 13:20:43.841806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:36.633 [2024-12-15 13:20:43.841813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:36.633 [2024-12-15 13:20:43.841818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:36.633 [2024-12-15 13:20:43.841823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:36.633 [2024-12-15 13:20:43.843365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:40:36.633 [2024-12-15 13:20:43.843473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:40:36.633 [2024-12-15 13:20:43.843585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:36.633 [2024-12-15 13:20:43.843585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:40:36.633 [2024-12-15 13:20:43.906573] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:36.633 [2024-12-15 13:20:43.907670] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:36.633 [2024-12-15 13:20:43.907752] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:36.633 [2024-12-15 13:20:43.908288] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:36.633 [2024-12-15 13:20:43.908335] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.633 13:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:36.633 [2024-12-15 13:20:43.988339] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:36.634 Malloc0 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:36.634 [2024-12-15 13:20:44.072668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:36.634 { 00:40:36.634 "params": { 00:40:36.634 "name": "Nvme$subsystem", 00:40:36.634 "trtype": "$TEST_TRANSPORT", 00:40:36.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:36.634 "adrfam": "ipv4", 00:40:36.634 "trsvcid": "$NVMF_PORT", 00:40:36.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:36.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:36.634 "hdgst": ${hdgst:-false}, 00:40:36.634 "ddgst": ${ddgst:-false} 00:40:36.634 }, 00:40:36.634 "method": "bdev_nvme_attach_controller" 00:40:36.634 } 00:40:36.634 EOF 00:40:36.634 )") 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:36.634 13:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:36.634 "params": { 00:40:36.634 "name": "Nvme1", 00:40:36.634 "trtype": "tcp", 00:40:36.634 "traddr": "10.0.0.2", 00:40:36.634 "adrfam": "ipv4", 00:40:36.634 "trsvcid": "4420", 00:40:36.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:36.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:36.634 "hdgst": false, 00:40:36.634 "ddgst": false 00:40:36.634 }, 00:40:36.634 "method": "bdev_nvme_attach_controller" 00:40:36.634 }' 00:40:36.634 [2024-12-15 13:20:44.124786] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:40:36.634 [2024-12-15 13:20:44.124841] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281213 ] 00:40:36.634 [2024-12-15 13:20:44.202081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:36.634 [2024-12-15 13:20:44.227234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:36.634 [2024-12-15 13:20:44.227265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.634 [2024-12-15 13:20:44.227266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:36.634 I/O targets: 00:40:36.634 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:36.634 00:40:36.634 00:40:36.634 CUnit - A unit testing framework for C - Version 2.1-3 00:40:36.634 http://cunit.sourceforge.net/ 00:40:36.634 00:40:36.634 00:40:36.634 Suite: bdevio tests on: Nvme1n1 00:40:36.634 Test: blockdev write read block ...passed 00:40:36.634 Test: blockdev write zeroes read block ...passed 00:40:36.634 Test: blockdev write zeroes read no split ...passed 00:40:36.891 Test: blockdev 
write zeroes read split ...passed 00:40:36.891 Test: blockdev write zeroes read split partial ...passed 00:40:36.891 Test: blockdev reset ...[2024-12-15 13:20:44.608306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:36.891 [2024-12-15 13:20:44.608365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1468340 (9): Bad file descriptor 00:40:36.891 [2024-12-15 13:20:44.653945] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:40:36.891 passed 00:40:36.891 Test: blockdev write read 8 blocks ...passed 00:40:36.891 Test: blockdev write read size > 128k ...passed 00:40:36.891 Test: blockdev write read invalid size ...passed 00:40:36.891 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:36.891 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:36.891 Test: blockdev write read max offset ...passed 00:40:37.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:37.148 Test: blockdev writev readv 8 blocks ...passed 00:40:37.148 Test: blockdev writev readv 30 x 1block ...passed 00:40:37.148 Test: blockdev writev readv block ...passed 00:40:37.148 Test: blockdev writev readv size > 128k ...passed 00:40:37.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:37.148 Test: blockdev comparev and writev ...[2024-12-15 13:20:44.906151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:37.148 [2024-12-15 13:20:44.906187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:37.148 [2024-12-15 13:20:44.906201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:37.148 
[2024-12-15 13:20:44.906208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:37.148 [2024-12-15 13:20:44.906496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:37.148 [2024-12-15 13:20:44.906507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:37.148 [2024-12-15 13:20:44.906518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:37.148 [2024-12-15 13:20:44.906525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:37.148 [2024-12-15 13:20:44.906818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:37.148 [2024-12-15 13:20:44.906833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:37.148 [2024-12-15 13:20:44.906845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:37.148 [2024-12-15 13:20:44.906856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:37.148 [2024-12-15 13:20:44.907138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:37.148 [2024-12-15 13:20:44.907152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:37.148 [2024-12-15 13:20:44.907164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:37.148 [2024-12-15 13:20:44.907172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:37.148 passed 00:40:37.148 Test: blockdev nvme passthru rw ...passed 00:40:37.149 Test: blockdev nvme passthru vendor specific ...[2024-12-15 13:20:44.989235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:37.149 [2024-12-15 13:20:44.989254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:37.149 [2024-12-15 13:20:44.989364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:37.149 [2024-12-15 13:20:44.989375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:37.149 [2024-12-15 13:20:44.989491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:37.149 [2024-12-15 13:20:44.989502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:37.149 [2024-12-15 13:20:44.989620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:37.149 [2024-12-15 13:20:44.989630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:37.149 passed 00:40:37.149 Test: blockdev nvme admin passthru ...passed 00:40:37.149 Test: blockdev copy ...passed 00:40:37.149 00:40:37.149 Run Summary: Type Total Ran Passed Failed Inactive 00:40:37.149 suites 1 1 n/a 0 0 00:40:37.149 tests 23 23 23 0 0 00:40:37.149 asserts 152 152 152 0 n/a 00:40:37.149 00:40:37.149 Elapsed time = 1.277 
seconds 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:37.406 rmmod nvme_tcp 00:40:37.406 rmmod nvme_fabrics 00:40:37.406 rmmod nvme_keyring 00:40:37.406 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1281004 ']' 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1281004 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1281004 ']' 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1281004 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1281004 00:40:37.407 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1281004' 00:40:37.665 killing process with pid 1281004 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1281004 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1281004 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:37.665 13:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:40.201 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:40.201 00:40:40.201 real 0m10.021s 00:40:40.201 user 0m9.024s 00:40:40.201 sys 0m5.112s 00:40:40.201 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:40.201 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:40.201 ************************************ 00:40:40.201 END TEST nvmf_bdevio 00:40:40.201 ************************************ 00:40:40.201 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:40.201 00:40:40.201 real 4m30.046s 00:40:40.201 user 9m3.316s 00:40:40.201 sys 1m49.532s 00:40:40.201 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:40:40.201 13:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:40.201 ************************************ 00:40:40.201 END TEST nvmf_target_core_interrupt_mode 00:40:40.201 ************************************ 00:40:40.201 13:20:47 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:40.201 13:20:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:40.201 13:20:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:40.202 13:20:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:40.202 ************************************ 00:40:40.202 START TEST nvmf_interrupt 00:40:40.202 ************************************ 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:40.202 * Looking for test storage... 
00:40:40.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:40.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.202 --rc genhtml_branch_coverage=1 00:40:40.202 --rc genhtml_function_coverage=1 00:40:40.202 --rc genhtml_legend=1 00:40:40.202 --rc geninfo_all_blocks=1 00:40:40.202 --rc geninfo_unexecuted_blocks=1 00:40:40.202 00:40:40.202 ' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:40.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.202 --rc genhtml_branch_coverage=1 00:40:40.202 --rc 
genhtml_function_coverage=1 00:40:40.202 --rc genhtml_legend=1 00:40:40.202 --rc geninfo_all_blocks=1 00:40:40.202 --rc geninfo_unexecuted_blocks=1 00:40:40.202 00:40:40.202 ' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:40.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.202 --rc genhtml_branch_coverage=1 00:40:40.202 --rc genhtml_function_coverage=1 00:40:40.202 --rc genhtml_legend=1 00:40:40.202 --rc geninfo_all_blocks=1 00:40:40.202 --rc geninfo_unexecuted_blocks=1 00:40:40.202 00:40:40.202 ' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:40.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.202 --rc genhtml_branch_coverage=1 00:40:40.202 --rc genhtml_function_coverage=1 00:40:40.202 --rc genhtml_legend=1 00:40:40.202 --rc geninfo_all_blocks=1 00:40:40.202 --rc geninfo_unexecuted_blocks=1 00:40:40.202 00:40:40.202 ' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:40.202 
13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.202 
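The `nvme gen-hostnqn` call above yields an NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`. A sketch of building an equivalent host NQN by hand, assuming only a Linux kernel UUID source (the fallback is our assumption, not something SPDK does):

```shell
# Build a host NQN in the same shape "nvme gen-hostnqn" prints above:
# the standard nqn.2014-08.org.nvmexpress:uuid: prefix plus a UUID.
# Using the kernel's random UUID (or uuidgen) is our substitute for
# nvme-cli here, not the test suite's method.
uuid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
echo "$hostnqn"
```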
13:20:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:40.202 13:20:47 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:40.202 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:40.202 
13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:40.203 13:20:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:40.203 13:20:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:46.775 13:20:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:46.775 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:46.776 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:46.776 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:46.776 13:20:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:46.776 Found net devices under 0000:af:00.0: cvl_0_0 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:46.776 Found net devices under 0000:af:00.1: cvl_0_1 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:46.776 13:20:53 
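The `nvmf_tcp_init` trace above moves one port of the NIC into a fresh network namespace and addresses both ends so target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the root namespace) can reach each other. A dry-run sketch that emits each `ip` command instead of executing it, since the real steps need root and the `cvl_0_*` devices; the `run` wrapper is ours:

```shell
# Dry-run of the namespace plumbing traced above. Swap the echo for
# "$@" to actually execute (requires root and the cvl_0_* net devices).
ns=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip netns add "$ns"
run ip link set cvl_0_0 netns "$ns"                      # target-side port
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run ip netns exec "$ns" ip link set lo up
```

The two `ping -c 1` checks later in the log verify exactly this plumbing in both directions.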
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:46.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:46.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:40:46.776 00:40:46.776 --- 10.0.0.2 ping statistics --- 00:40:46.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.776 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:46.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:46.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:40:46.776 00:40:46.776 --- 10.0.0.1 ping statistics --- 00:40:46.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:46.776 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:46.776 13:20:53 
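The `ipts` helper expansion above shows the pattern: every iptables rule the test adds is tagged with an `SPDK_NVMF:` comment repeating its own arguments, so teardown can later find and delete exactly those rules. A sketch that only composes the tagged command string (iptables itself needs root); `build_ipts` is our name for the reconstructed behavior:

```shell
# Compose, without executing, the tagged rule the "ipts" wrapper traced
# above produces: the caller's arguments plus an SPDK_NVMF comment that
# embeds those same arguments for later cleanup.
build_ipts() {
    printf 'iptables %s -m comment --comment "SPDK_NVMF:%s"\n' "$*" "$*"
}

build_ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

This matches the expanded `iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'` line in the log.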
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1284704 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1284704 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1284704 ']' 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:46.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:46.776 13:20:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:46.776 [2024-12-15 13:20:53.815107] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:46.776 [2024-12-15 13:20:53.816004] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:46.776 [2024-12-15 13:20:53.816037] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:46.776 [2024-12-15 13:20:53.898773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:46.776 [2024-12-15 13:20:53.920606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:46.776 [2024-12-15 13:20:53.920641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:46.776 [2024-12-15 13:20:53.920649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:46.776 [2024-12-15 13:20:53.920654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:46.776 [2024-12-15 13:20:53.920659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:46.776 [2024-12-15 13:20:53.921738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.776 [2024-12-15 13:20:53.921738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:46.776 [2024-12-15 13:20:53.985573] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:46.776 [2024-12-15 13:20:53.986142] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:46.776 [2024-12-15 13:20:53.986330] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:46.776 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:46.776 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:46.776 13:20:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:46.776 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:46.776 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:46.776 13:20:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:46.776 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:46.776 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:46.777 5000+0 records in 00:40:46.777 5000+0 records out 00:40:46.777 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0173949 s, 589 MB/s 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:46.777 AIO0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.777 13:20:54 
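The `setup_bdev_aio` step above writes a 10 MB backing file (5000 blocks x 2048 bytes = 10240000 bytes, matching the dd summary in the log) and registers it as bdev `AIO0`. The file-creation half can be reproduced standalone; the `bdev_aio_create` RPC needs a running target, so only the dd step is sketched here, against a temp file rather than SPDK's workspace path:

```shell
# Recreate just the backing-file step from setup_bdev_aio above and
# confirm the size dd reported in the log: 5000 * 2048 = 10240000 bytes.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 status=none
wc -c < "$aiofile"
rm -f "$aiofile"
```

With the file in place, the log's `rpc_cmd bdev_aio_create <file> AIO0 2048` exposes it as a 2048-byte-block AIO bdev for the `nvmf_subsystem_add_ns` call that follows.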
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:46.777 [2024-12-15 13:20:54.126436] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:46.777 [2024-12-15 13:20:54.166820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1284704 0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1284704 0 idle 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1284704 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1284704 -w 256 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1284704 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0' 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1284704 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:46.777 
13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1284704 1 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1284704 1 idle 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1284704 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1284704 -w 256 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1284716 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1284716 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1284956 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1284704 0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1284704 0 busy 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1284704 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1284704 -w 256 00:40:46.777 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1284704 root 20 0 128.2g 46848 33792 R 73.3 0.1 0:00.34 reactor_0' 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1284704 root 20 0 128.2g 46848 33792 R 73.3 0.1 0:00.34 reactor_0 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:47.035 13:20:54 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1284704 1 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1284704 1 busy 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1284704 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1284704 -w 256 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1284716 root 20 0 128.2g 46848 33792 R 87.5 0.1 0:00.23 reactor_1' 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1284716 root 20 0 128.2g 46848 33792 R 87.5 0.1 0:00.23 reactor_1 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=87 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:47.035 13:20:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1284956 00:40:57.107 Initializing NVMe Controllers 00:40:57.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:57.107 Controller IO queue size 256, less than required. 00:40:57.107 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:57.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:57.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:57.107 Initialization complete. Launching workers. 
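The `reactor_is_busy_or_idle` checks traced above reduce to one `top` snapshot per attempt: grep out the reactor thread's row, cut the %CPU column, truncate it to an integer, and compare it to a threshold. A minimal self-contained sketch of that logic, using a sample `top` row from this log in place of the live `top -bHn 1 -p <pid> -w 256 | grep reactor_0` pipeline (variable names follow interrupt/common.sh; the canned row is an assumption standing in for live output):

```shell
# One captured `top -bHn 1` row for the reactor thread; the real script
# obtains this with: top -bHn 1 -p "$pid" -w 256 | grep reactor_0
top_reactor='1284704 root 20 0 128.2g 46848 33792 R 73.3 0.1 0:00.34 reactor_0'

# %CPU is column 9 once leading whitespace is stripped.
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}   # truncate 73.3 -> 73, as the trace shows

busy_threshold=30
if (( cpu_rate > busy_threshold )); then
  echo "reactor_0 busy at ${cpu_rate}%"
else
  echo "reactor_0 idle at ${cpu_rate}%"
fi
```

The idle path is the mirror image: the same pipeline, but the check fails if `cpu_rate` exceeds `idle_threshold` (30 here, while the busy threshold rises to 65 once perf has finished).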
00:40:57.107 ======================================================== 00:40:57.107 Latency(us) 00:40:57.107 Device Information : IOPS MiB/s Average min max 00:40:57.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16388.00 64.02 15628.73 3235.94 33579.03 00:40:57.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16593.40 64.82 15432.14 7541.88 55721.94 00:40:57.107 ======================================================== 00:40:57.107 Total : 32981.40 128.83 15529.82 3235.94 55721.94 00:40:57.107 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1284704 0 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1284704 0 idle 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1284704 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:57.107 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1284704 -w 256 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1284704 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.22 reactor_0' 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1284704 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.22 reactor_0 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1284704 1 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1284704 1 idle 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1284704 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:57.108 13:21:04 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1284704 -w 256 00:40:57.108 13:21:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1284716 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1284716 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:57.367 13:21:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:57.933 13:21:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
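The `waitforserial` helper invoked here is a bounded retry loop: every couple of seconds it lists block devices with `lsblk -l -o NAME,SERIAL`, counts rows carrying the expected serial, and gives up after roughly 15 attempts. A hedged sketch of that loop, fed a canned lsblk snapshot (an assumption, so the example runs standalone without a fabrics target):

```shell
serial='SPDKISFASTANDAWESOME'
nvme_device_counter=1

# Stand-in for `lsblk -l -o NAME,SERIAL`; on the rig this row only
# appears once the kernel has attached the connected namespace.
fake_lsblk() {
  printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'
}

i=0
while (( i++ <= 15 )); do
  nvme_devices=$(fake_lsblk | grep -c "$serial")
  (( nvme_devices >= nvme_device_counter )) && break
  sleep 2   # the real helper sleeps between lsblk polls
done
echo "found $nvme_devices matching device(s) after $i attempt(s)"
```

`waitforserial_disconnect`, seen later in this log, is the inverse: it polls until `grep -q -w` on the same lsblk output stops matching.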
00:40:57.933 13:21:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:57.933 13:21:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:57.933 13:21:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:57.933 13:21:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1284704 0 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1284704 0 idle 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1284704 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1284704 -w 256 00:40:59.838 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1284704 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.48 reactor_0' 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1284704 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.48 reactor_0 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1284704 1 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1284704 1 idle 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1284704 00:41:00.097 
13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1284704 -w 256 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1284716 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.11 reactor_1' 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1284716 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.11 reactor_1 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:00.097 13:21:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:00.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:00.356 rmmod nvme_tcp 00:41:00.356 rmmod nvme_fabrics 00:41:00.356 rmmod nvme_keyring 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:00.356 13:21:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1284704 ']' 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1284704 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1284704 ']' 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1284704 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:00.356 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1284704 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1284704' 00:41:00.616 killing process with pid 1284704 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1284704 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1284704 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:00.616 13:21:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.153 13:21:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:03.153 00:41:03.153 real 0m22.867s 00:41:03.153 user 0m39.742s 00:41:03.153 sys 0m8.299s 00:41:03.153 13:21:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:03.153 13:21:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:03.153 ************************************ 00:41:03.153 END TEST nvmf_interrupt 00:41:03.153 ************************************ 00:41:03.153 00:41:03.153 real 35m17.120s 00:41:03.153 user 85m44.001s 00:41:03.153 sys 10m28.199s 00:41:03.153 13:21:10 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:03.153 13:21:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:03.153 ************************************ 00:41:03.153 END TEST nvmf_tcp 00:41:03.153 ************************************ 00:41:03.153 13:21:10 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:03.153 13:21:10 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:03.153 13:21:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:03.153 13:21:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:03.153 13:21:10 -- common/autotest_common.sh@10 -- # set +x 00:41:03.153 ************************************ 
00:41:03.153 START TEST spdkcli_nvmf_tcp 00:41:03.153 ************************************ 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:03.153 * Looking for test storage... 00:41:03.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:03.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.153 --rc genhtml_branch_coverage=1 00:41:03.153 --rc genhtml_function_coverage=1 00:41:03.153 --rc genhtml_legend=1 00:41:03.153 --rc geninfo_all_blocks=1 00:41:03.153 --rc geninfo_unexecuted_blocks=1 00:41:03.153 00:41:03.153 ' 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:03.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.153 --rc genhtml_branch_coverage=1 00:41:03.153 --rc genhtml_function_coverage=1 00:41:03.153 --rc genhtml_legend=1 00:41:03.153 --rc geninfo_all_blocks=1 
00:41:03.153 --rc geninfo_unexecuted_blocks=1 00:41:03.153 00:41:03.153 ' 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:03.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.153 --rc genhtml_branch_coverage=1 00:41:03.153 --rc genhtml_function_coverage=1 00:41:03.153 --rc genhtml_legend=1 00:41:03.153 --rc geninfo_all_blocks=1 00:41:03.153 --rc geninfo_unexecuted_blocks=1 00:41:03.153 00:41:03.153 ' 00:41:03.153 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:03.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.153 --rc genhtml_branch_coverage=1 00:41:03.153 --rc genhtml_function_coverage=1 00:41:03.153 --rc genhtml_legend=1 00:41:03.153 --rc geninfo_all_blocks=1 00:41:03.153 --rc geninfo_unexecuted_blocks=1 00:41:03.153 00:41:03.153 ' 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:03.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1288103 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1288103 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1288103 ']' 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:03.154 
13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:03.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:03.154 13:21:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:03.154 [2024-12-15 13:21:10.909894] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:03.154 [2024-12-15 13:21:10.909943] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288103 ] 00:41:03.154 [2024-12-15 13:21:10.982639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:03.154 [2024-12-15 13:21:11.006673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:03.154 [2024-12-15 13:21:11.006677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:03.413 13:21:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:03.413 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:03.413 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:03.413 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:03.413 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:03.413 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:03.413 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:03.413 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:03.413 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:03.413 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:03.413 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:03.413 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:03.413 ' 00:41:05.945 [2024-12-15 13:21:13.831400] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:07.322 [2024-12-15 13:21:15.175865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:09.853 [2024-12-15 13:21:17.659559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:41:12.385 [2024-12-15 13:21:19.838332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:13.760 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:13.760 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:13.760 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:13.760 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:13.760 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:13.760 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:13.760 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:13.760 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:13.760 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:13.760 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:13.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:13.760 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:13.760 13:21:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:13.760 13:21:21 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:41:13.760 13:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:13.760 13:21:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:13.760 13:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:13.760 13:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:13.760 13:21:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:13.760 13:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:14.327 13:21:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:14.327 13:21:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:14.327 13:21:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:14.327 13:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:14.327 13:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:14.327 13:21:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:14.327 13:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:14.327 13:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:14.327 13:21:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:14.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:14.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:14.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:14.327 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:14.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:14.328 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:14.328 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:14.328 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:14.328 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:14.328 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:14.328 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:14.328 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:14.328 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:14.328 ' 00:41:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:20.891 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:20.891 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:20.891 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:20.891 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:20.891 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:20.891 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:20.891 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:20.891 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1288103 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1288103 ']' 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1288103 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1288103 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1288103' 00:41:20.891 killing process with pid 1288103 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1288103 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1288103 00:41:20.891 13:21:27 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1288103 ']' 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1288103 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1288103 ']' 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1288103 00:41:20.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1288103) - No such process 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1288103 is not found' 00:41:20.891 Process with pid 1288103 is not found 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:20.891 00:41:20.891 real 0m17.327s 00:41:20.891 user 0m38.272s 00:41:20.891 sys 0m0.776s 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:20.891 13:21:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:20.891 ************************************ 00:41:20.891 END TEST spdkcli_nvmf_tcp 00:41:20.891 ************************************ 00:41:20.891 13:21:28 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:20.891 13:21:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:20.891 13:21:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:41:20.891 13:21:28 -- common/autotest_common.sh@10 -- # set +x 00:41:20.891 ************************************ 00:41:20.891 START TEST nvmf_identify_passthru 00:41:20.891 ************************************ 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:20.891 * Looking for test storage... 00:41:20.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:20.891 13:21:28 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:20.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.891 --rc genhtml_branch_coverage=1 00:41:20.891 --rc genhtml_function_coverage=1 00:41:20.891 --rc genhtml_legend=1 00:41:20.891 --rc geninfo_all_blocks=1 00:41:20.891 --rc geninfo_unexecuted_blocks=1 00:41:20.891 
00:41:20.891 ' 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:20.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.891 --rc genhtml_branch_coverage=1 00:41:20.891 --rc genhtml_function_coverage=1 00:41:20.891 --rc genhtml_legend=1 00:41:20.891 --rc geninfo_all_blocks=1 00:41:20.891 --rc geninfo_unexecuted_blocks=1 00:41:20.891 00:41:20.891 ' 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:20.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.891 --rc genhtml_branch_coverage=1 00:41:20.891 --rc genhtml_function_coverage=1 00:41:20.891 --rc genhtml_legend=1 00:41:20.891 --rc geninfo_all_blocks=1 00:41:20.891 --rc geninfo_unexecuted_blocks=1 00:41:20.891 00:41:20.891 ' 00:41:20.891 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:20.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.891 --rc genhtml_branch_coverage=1 00:41:20.891 --rc genhtml_function_coverage=1 00:41:20.891 --rc genhtml_legend=1 00:41:20.891 --rc geninfo_all_blocks=1 00:41:20.891 --rc geninfo_unexecuted_blocks=1 00:41:20.891 00:41:20.891 ' 00:41:20.891 13:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:20.891 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:20.891 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:20.891 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:20.891 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:20.891 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:20.891 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:20.891 13:21:28 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:20.891 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:20.892 13:21:28 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:20.892 13:21:28 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:20.892 13:21:28 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:20.892 13:21:28 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:20.892 13:21:28 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:20.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:20.892 13:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:20.892 13:21:28 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:20.892 13:21:28 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:20.892 13:21:28 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:20.892 13:21:28 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:20.892 13:21:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.892 13:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.892 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:20.892 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:20.892 13:21:28 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:20.892 13:21:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:26.167 
13:21:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:26.167 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:26.167 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:26.167 Found net devices under 0000:af:00.0: cvl_0_0 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:26.167 13:21:33 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:26.167 Found net devices under 0000:af:00.1: cvl_0_1 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:26.167 
13:21:33 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:26.167 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:26.168 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:26.168 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:26.168 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:26.168 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:26.168 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:26.168 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:26.168 13:21:33 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:26.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:26.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:41:26.168 00:41:26.168 --- 10.0.0.2 ping statistics --- 00:41:26.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.168 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:26.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:26.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:41:26.168 00:41:26.168 --- 10.0.0.1 ping statistics --- 00:41:26.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.168 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:26.168 13:21:34 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:26.426 13:21:34 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:26.426 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:26.426 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:26.426 13:21:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:26.426 
13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:41:26.427 13:21:34 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:41:26.427 13:21:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:41:26.427 13:21:34 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:41:26.427 13:21:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:26.427 13:21:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:26.427 13:21:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:30.615 13:21:38 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:41:30.615 13:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:30.615 13:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:30.615 13:21:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:34.802 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:34.802 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:34.802 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:34.802 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1295204 00:41:34.802 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:34.802 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:34.802 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1295204 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1295204 ']' 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:34.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:34.802 [2024-12-15 13:21:42.540992] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:34.802 [2024-12-15 13:21:42.541037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:34.802 [2024-12-15 13:21:42.601708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:34.802 [2024-12-15 13:21:42.625360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:34.802 [2024-12-15 13:21:42.625396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:34.802 [2024-12-15 13:21:42.625403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:34.802 [2024-12-15 13:21:42.625409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:34.802 [2024-12-15 13:21:42.625414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:34.802 [2024-12-15 13:21:42.626785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:34.802 [2024-12-15 13:21:42.626906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:34.802 [2024-12-15 13:21:42.626939] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:34.802 [2024-12-15 13:21:42.626941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:34.802 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:34.802 INFO: Log level set to 20 00:41:34.802 INFO: Requests: 00:41:34.802 { 00:41:34.802 "jsonrpc": "2.0", 00:41:34.802 "method": "nvmf_set_config", 00:41:34.802 "id": 1, 00:41:34.802 "params": { 00:41:34.802 "admin_cmd_passthru": { 00:41:34.802 "identify_ctrlr": true 00:41:34.802 } 00:41:34.802 } 00:41:34.802 } 00:41:34.802 00:41:34.802 INFO: response: 00:41:34.802 { 00:41:34.802 "jsonrpc": "2.0", 00:41:34.802 "id": 1, 00:41:34.802 "result": true 00:41:34.802 } 00:41:34.802 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.802 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.802 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:34.802 INFO: Setting log level to 20 00:41:34.802 INFO: Setting log level to 20 00:41:34.802 INFO: Log level set to 20 00:41:34.802 INFO: Log level set to 20 00:41:34.802 
INFO: Requests: 00:41:34.802 { 00:41:34.802 "jsonrpc": "2.0", 00:41:34.802 "method": "framework_start_init", 00:41:34.802 "id": 1 00:41:34.802 } 00:41:34.802 00:41:34.802 INFO: Requests: 00:41:34.802 { 00:41:34.802 "jsonrpc": "2.0", 00:41:34.802 "method": "framework_start_init", 00:41:34.802 "id": 1 00:41:34.802 } 00:41:34.802 00:41:35.061 [2024-12-15 13:21:42.766613] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:35.061 INFO: response: 00:41:35.061 { 00:41:35.061 "jsonrpc": "2.0", 00:41:35.061 "id": 1, 00:41:35.061 "result": true 00:41:35.061 } 00:41:35.061 00:41:35.061 INFO: response: 00:41:35.061 { 00:41:35.061 "jsonrpc": "2.0", 00:41:35.061 "id": 1, 00:41:35.061 "result": true 00:41:35.061 } 00:41:35.061 00:41:35.061 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.061 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:35.061 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.061 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:35.061 INFO: Setting log level to 40 00:41:35.061 INFO: Setting log level to 40 00:41:35.061 INFO: Setting log level to 40 00:41:35.061 [2024-12-15 13:21:42.779893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:35.061 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.061 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:35.061 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:35.061 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:35.061 13:21:42 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:41:35.061 13:21:42 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.061 13:21:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.347 Nvme0n1 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.347 [2024-12-15 13:21:45.697974] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.347 13:21:45 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.347 [ 00:41:38.347 { 00:41:38.347 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:38.347 "subtype": "Discovery", 00:41:38.347 "listen_addresses": [], 00:41:38.347 "allow_any_host": true, 00:41:38.347 "hosts": [] 00:41:38.347 }, 00:41:38.347 { 00:41:38.347 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:38.347 "subtype": "NVMe", 00:41:38.347 "listen_addresses": [ 00:41:38.347 { 00:41:38.347 "trtype": "TCP", 00:41:38.347 "adrfam": "IPv4", 00:41:38.347 "traddr": "10.0.0.2", 00:41:38.347 "trsvcid": "4420" 00:41:38.347 } 00:41:38.347 ], 00:41:38.347 "allow_any_host": true, 00:41:38.347 "hosts": [], 00:41:38.347 "serial_number": "SPDK00000000000001", 00:41:38.347 "model_number": "SPDK bdev Controller", 00:41:38.347 "max_namespaces": 1, 00:41:38.347 "min_cntlid": 1, 00:41:38.347 "max_cntlid": 65519, 00:41:38.347 "namespaces": [ 00:41:38.347 { 00:41:38.347 "nsid": 1, 00:41:38.347 "bdev_name": "Nvme0n1", 00:41:38.347 "name": "Nvme0n1", 00:41:38.347 "nguid": "2534F19EB77544FD8F675EEA7DB4B2C0", 00:41:38.347 "uuid": "2534f19e-b775-44fd-8f67-5eea7db4b2c0" 00:41:38.347 } 00:41:38.347 ] 00:41:38.347 } 00:41:38.347 ] 00:41:38.347 13:21:45 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:38.347 13:21:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:38.347 13:21:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:38.347 13:21:46 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:41:38.347 13:21:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:38.347 13:21:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.347 13:21:46 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:38.347 13:21:46 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:38.347 rmmod nvme_tcp 00:41:38.347 rmmod nvme_fabrics 00:41:38.347 rmmod nvme_keyring 00:41:38.347 13:21:46 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1295204 ']' 00:41:38.347 13:21:46 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1295204 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1295204 ']' 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1295204 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1295204 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1295204' 00:41:38.347 killing process with pid 1295204 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1295204 00:41:38.347 13:21:46 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1295204 00:41:40.249 13:21:47 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:40.249 13:21:47 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:40.249 13:21:47 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:40.249 13:21:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:40.249 13:21:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:41:40.249 13:21:47 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:40.249 13:21:47 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:41:40.249 13:21:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:40.249 13:21:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:40.249 13:21:47 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:40.249 13:21:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:40.249 13:21:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:42.153 13:21:49 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:42.153 00:41:42.153 real 0m21.704s 00:41:42.153 user 0m27.696s 00:41:42.153 sys 0m5.220s 00:41:42.153 13:21:49 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:42.154 13:21:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:42.154 ************************************ 00:41:42.154 END TEST nvmf_identify_passthru 00:41:42.154 ************************************ 00:41:42.154 13:21:49 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:42.154 13:21:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:42.154 13:21:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:42.154 13:21:49 -- common/autotest_common.sh@10 -- # set +x 00:41:42.154 ************************************ 00:41:42.154 START TEST nvmf_dif 00:41:42.154 ************************************ 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:42.154 * Looking for test storage... 
00:41:42.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:42.154 13:21:49 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:42.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.154 --rc genhtml_branch_coverage=1 00:41:42.154 --rc genhtml_function_coverage=1 00:41:42.154 --rc genhtml_legend=1 00:41:42.154 --rc geninfo_all_blocks=1 00:41:42.154 --rc geninfo_unexecuted_blocks=1 00:41:42.154 00:41:42.154 ' 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:42.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.154 --rc genhtml_branch_coverage=1 00:41:42.154 --rc genhtml_function_coverage=1 00:41:42.154 --rc genhtml_legend=1 00:41:42.154 --rc geninfo_all_blocks=1 00:41:42.154 --rc geninfo_unexecuted_blocks=1 00:41:42.154 00:41:42.154 ' 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:41:42.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.154 --rc genhtml_branch_coverage=1 00:41:42.154 --rc genhtml_function_coverage=1 00:41:42.154 --rc genhtml_legend=1 00:41:42.154 --rc geninfo_all_blocks=1 00:41:42.154 --rc geninfo_unexecuted_blocks=1 00:41:42.154 00:41:42.154 ' 00:41:42.154 13:21:49 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:42.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:42.154 --rc genhtml_branch_coverage=1 00:41:42.154 --rc genhtml_function_coverage=1 00:41:42.154 --rc genhtml_legend=1 00:41:42.154 --rc geninfo_all_blocks=1 00:41:42.154 --rc geninfo_unexecuted_blocks=1 00:41:42.154 00:41:42.154 ' 00:41:42.154 13:21:50 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:42.154 13:21:50 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:42.154 13:21:50 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:42.154 13:21:50 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:42.154 13:21:50 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:42.154 13:21:50 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:42.154 13:21:50 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.154 13:21:50 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.154 13:21:50 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.154 13:21:50 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:42.154 13:21:50 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:42.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:42.154 13:21:50 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:42.154 13:21:50 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:41:42.154 13:21:50 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:42.154 13:21:50 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:42.154 13:21:50 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:42.154 13:21:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:42.154 13:21:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:42.154 13:21:50 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:41:42.154 13:21:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:48.721 13:21:55 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:48.721 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:48.721 13:21:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:48.722 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:48.722 13:21:55 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:48.722 Found net devices under 0000:af:00.0: cvl_0_0 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:48.722 Found net devices under 0000:af:00.1: cvl_0_1 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:48.722 
13:21:55 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:48.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:48.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:41:48.722 00:41:48.722 --- 10.0.0.2 ping statistics --- 00:41:48.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:48.722 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:48.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:48.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:41:48.722 00:41:48.722 --- 10.0.0.1 ping statistics --- 00:41:48.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:48.722 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:48.722 13:21:55 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:50.627 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:50.627 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:41:50.627 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:41:50.887 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:41:50.887 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:41:50.887 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:41:50.887 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:50.887 13:21:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:50.887 13:21:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:50.887 13:21:58 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:50.887 13:21:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1300568 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1300568 00:41:50.887 13:21:58 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:50.887 13:21:58 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1300568 ']' 00:41:50.887 13:21:58 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:50.887 13:21:58 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:50.887 13:21:58 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:50.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:50.887 13:21:58 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:50.887 13:21:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:50.887 [2024-12-15 13:21:58.776490] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:50.887 [2024-12-15 13:21:58.776532] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:51.145 [2024-12-15 13:21:58.855574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:51.145 [2024-12-15 13:21:58.877374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:51.145 [2024-12-15 13:21:58.877411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:51.145 [2024-12-15 13:21:58.877418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:51.145 [2024-12-15 13:21:58.877424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:51.145 [2024-12-15 13:21:58.877429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:51.145 [2024-12-15 13:21:58.877958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:51.145 13:21:58 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:51.145 13:21:58 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:41:51.145 13:21:58 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:51.145 13:21:58 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:51.145 13:21:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:51.145 13:21:59 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:51.145 13:21:59 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:51.145 13:21:59 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:51.145 13:21:59 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.145 13:21:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:51.145 [2024-12-15 13:21:59.020432] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:51.145 13:21:59 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.146 13:21:59 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:51.146 13:21:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:51.146 13:21:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:51.146 13:21:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:51.404 ************************************ 00:41:51.404 START TEST fio_dif_1_default 00:41:51.404 ************************************ 00:41:51.404 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:41:51.404 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:51.404 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:51.404 13:21:59 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:51.405 bdev_null0 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:51.405 [2024-12-15 13:21:59.100800] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:51.405 { 00:41:51.405 "params": { 00:41:51.405 "name": "Nvme$subsystem", 00:41:51.405 "trtype": "$TEST_TRANSPORT", 00:41:51.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:51.405 "adrfam": "ipv4", 00:41:51.405 "trsvcid": "$NVMF_PORT", 00:41:51.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:51.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:51.405 "hdgst": ${hdgst:-false}, 00:41:51.405 "ddgst": ${ddgst:-false} 00:41:51.405 }, 00:41:51.405 "method": "bdev_nvme_attach_controller" 00:41:51.405 } 00:41:51.405 EOF 00:41:51.405 )") 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:51.405 "params": { 00:41:51.405 "name": "Nvme0", 00:41:51.405 "trtype": "tcp", 00:41:51.405 "traddr": "10.0.0.2", 00:41:51.405 "adrfam": "ipv4", 00:41:51.405 "trsvcid": "4420", 00:41:51.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:51.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:51.405 "hdgst": false, 00:41:51.405 "ddgst": false 00:41:51.405 }, 00:41:51.405 "method": "bdev_nvme_attach_controller" 00:41:51.405 }' 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:51.405 13:21:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:51.664 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:51.664 fio-3.35 
00:41:51.664 Starting 1 thread 00:42:03.871 00:42:03.871 filename0: (groupid=0, jobs=1): err= 0: pid=1300930: Sun Dec 15 13:22:10 2024 00:42:03.871 read: IOPS=97, BW=388KiB/s (398kB/s)(3888KiB/10015msec) 00:42:03.871 slat (nsec): min=6087, max=31747, avg=6391.19, stdev=1234.32 00:42:03.871 clat (usec): min=40903, max=46484, avg=41194.27, stdev=515.81 00:42:03.871 lat (usec): min=40909, max=46516, avg=41200.66, stdev=516.20 00:42:03.871 clat percentiles (usec): 00:42:03.871 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:03.871 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:03.871 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:42:03.871 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:42:03.871 | 99.99th=[46400] 00:42:03.871 bw ( KiB/s): min= 351, max= 416, per=99.69%, avg=387.15, stdev=14.44, samples=20 00:42:03.871 iops : min= 87, max= 104, avg=96.75, stdev= 3.71, samples=20 00:42:03.871 lat (msec) : 50=100.00% 00:42:03.871 cpu : usr=92.48%, sys=7.26%, ctx=15, majf=0, minf=0 00:42:03.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.871 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:03.871 00:42:03.871 Run status group 0 (all jobs): 00:42:03.871 READ: bw=388KiB/s (398kB/s), 388KiB/s-388KiB/s (398kB/s-398kB/s), io=3888KiB (3981kB), run=10015-10015msec 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.871 00:42:03.871 real 0m11.230s 00:42:03.871 user 0m15.443s 00:42:03.871 sys 0m1.086s 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:03.871 13:22:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:03.871 ************************************ 00:42:03.871 END TEST fio_dif_1_default 00:42:03.871 ************************************ 00:42:03.871 13:22:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:03.871 13:22:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:03.872 13:22:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:03.872 13:22:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:03.872 ************************************ 00:42:03.872 START TEST fio_dif_1_multi_subsystems 00:42:03.872 ************************************ 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.872 bdev_null0 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.872 [2024-12-15 13:22:10.405046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.872 bdev_null1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.872 13:22:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:03.872 { 00:42:03.872 "params": { 00:42:03.872 "name": "Nvme$subsystem", 00:42:03.872 "trtype": "$TEST_TRANSPORT", 00:42:03.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:03.872 "adrfam": "ipv4", 00:42:03.872 "trsvcid": "$NVMF_PORT", 00:42:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:03.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:03.872 "hdgst": ${hdgst:-false}, 00:42:03.872 "ddgst": ${ddgst:-false} 00:42:03.872 }, 00:42:03.872 "method": "bdev_nvme_attach_controller" 00:42:03.872 } 00:42:03.872 EOF 00:42:03.872 )") 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:03.872 { 00:42:03.872 "params": { 00:42:03.872 "name": "Nvme$subsystem", 00:42:03.872 "trtype": "$TEST_TRANSPORT", 00:42:03.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:03.872 "adrfam": "ipv4", 00:42:03.872 "trsvcid": "$NVMF_PORT", 00:42:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:03.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:03.872 "hdgst": ${hdgst:-false}, 00:42:03.872 "ddgst": ${ddgst:-false} 00:42:03.872 }, 00:42:03.872 "method": "bdev_nvme_attach_controller" 00:42:03.872 } 00:42:03.872 EOF 00:42:03.872 )") 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:03.872 "params": { 00:42:03.872 "name": "Nvme0", 00:42:03.872 "trtype": "tcp", 00:42:03.872 "traddr": "10.0.0.2", 00:42:03.872 "adrfam": "ipv4", 00:42:03.872 "trsvcid": "4420", 00:42:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:03.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:03.872 "hdgst": false, 00:42:03.872 "ddgst": false 00:42:03.872 }, 00:42:03.872 "method": "bdev_nvme_attach_controller" 00:42:03.872 },{ 00:42:03.872 "params": { 00:42:03.872 "name": "Nvme1", 00:42:03.872 "trtype": "tcp", 00:42:03.872 "traddr": "10.0.0.2", 00:42:03.872 "adrfam": "ipv4", 00:42:03.872 "trsvcid": "4420", 00:42:03.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:03.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:03.872 "hdgst": false, 00:42:03.872 "ddgst": false 00:42:03.872 }, 00:42:03.872 "method": "bdev_nvme_attach_controller" 00:42:03.872 }' 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:03.872 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:03.873 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:03.873 13:22:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:03.873 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:03.873 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:03.873 fio-3.35 00:42:03.873 Starting 2 threads 00:42:13.998 00:42:13.998 filename0: (groupid=0, jobs=1): err= 0: pid=1302855: Sun Dec 15 13:22:21 2024 00:42:13.998 read: IOPS=201, BW=805KiB/s (824kB/s)(8064KiB/10023msec) 00:42:13.998 slat (nsec): min=6095, max=40860, avg=7724.86, stdev=3179.22 00:42:13.998 clat (usec): min=370, max=42607, avg=19863.58, stdev=20394.86 00:42:13.998 lat (usec): min=377, max=42614, avg=19871.31, stdev=20394.19 00:42:13.998 clat percentiles (usec): 00:42:13.998 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 424], 00:42:13.998 | 30.00th=[ 453], 40.00th=[ 586], 50.00th=[ 652], 60.00th=[40633], 00:42:13.998 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:42:13.998 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:13.998 | 99.99th=[42730] 00:42:13.998 bw ( KiB/s): min= 672, max= 1088, per=50.67%, avg=804.80, stdev=86.41, samples=20 00:42:13.998 iops : min= 168, max= 272, avg=201.20, stdev=21.60, samples=20 00:42:13.998 lat (usec) : 500=32.89%, 750=17.66%, 1000=1.39% 00:42:13.998 lat (msec) : 2=0.64%, 50=47.42% 00:42:13.998 cpu : usr=97.04%, sys=2.71%, ctx=13, majf=0, minf=51 00:42:13.998 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:13.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:42:13.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:13.998 issued rwts: total=2016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:13.998 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:13.998 filename1: (groupid=0, jobs=1): err= 0: pid=1302856: Sun Dec 15 13:22:21 2024 00:42:13.998 read: IOPS=195, BW=783KiB/s (802kB/s)(7840KiB/10014msec) 00:42:13.998 slat (nsec): min=6094, max=43862, avg=7804.96, stdev=3522.26 00:42:13.998 clat (usec): min=364, max=42542, avg=20413.22, stdev=20341.28 00:42:13.998 lat (usec): min=370, max=42549, avg=20421.03, stdev=20340.51 00:42:13.998 clat percentiles (usec): 00:42:13.998 | 1.00th=[ 379], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 424], 00:42:13.998 | 30.00th=[ 506], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[40633], 00:42:13.998 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:42:13.998 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:13.998 | 99.99th=[42730] 00:42:13.998 bw ( KiB/s): min= 672, max= 832, per=49.28%, avg=782.40, stdev=44.63, samples=20 00:42:13.998 iops : min= 168, max= 208, avg=195.60, stdev=11.16, samples=20 00:42:13.998 lat (usec) : 500=29.13%, 750=21.48%, 1000=0.41% 00:42:13.999 lat (msec) : 50=48.98% 00:42:13.999 cpu : usr=96.83%, sys=2.90%, ctx=17, majf=0, minf=175 00:42:13.999 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:13.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:13.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:13.999 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:13.999 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:13.999 00:42:13.999 Run status group 0 (all jobs): 00:42:13.999 READ: bw=1587KiB/s (1625kB/s), 783KiB/s-805KiB/s (802kB/s-824kB/s), io=15.5MiB (16.3MB), run=10014-10023msec 00:42:14.257 13:22:21 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:14.257 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:14.257 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.258 00:42:14.258 real 0m11.597s 00:42:14.258 user 0m26.855s 00:42:14.258 sys 0m0.934s 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:14.258 13:22:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:14.258 ************************************ 00:42:14.258 END TEST fio_dif_1_multi_subsystems 00:42:14.258 ************************************ 00:42:14.258 13:22:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:14.258 13:22:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:14.258 13:22:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:14.258 13:22:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:14.258 ************************************ 00:42:14.258 START TEST fio_dif_rand_params 00:42:14.258 ************************************ 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:14.258 13:22:22 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.258 bdev_null0 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.258 
13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.258 [2024-12-15 13:22:22.069351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:14.258 { 00:42:14.258 "params": { 00:42:14.258 "name": 
"Nvme$subsystem", 00:42:14.258 "trtype": "$TEST_TRANSPORT", 00:42:14.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:14.258 "adrfam": "ipv4", 00:42:14.258 "trsvcid": "$NVMF_PORT", 00:42:14.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:14.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:14.258 "hdgst": ${hdgst:-false}, 00:42:14.258 "ddgst": ${ddgst:-false} 00:42:14.258 }, 00:42:14.258 "method": "bdev_nvme_attach_controller" 00:42:14.258 } 00:42:14.258 EOF 00:42:14.258 )") 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:14.258 13:22:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:14.258 "params": { 00:42:14.258 "name": "Nvme0", 00:42:14.258 "trtype": "tcp", 00:42:14.258 "traddr": "10.0.0.2", 00:42:14.258 "adrfam": "ipv4", 00:42:14.258 "trsvcid": "4420", 00:42:14.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:14.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:14.258 "hdgst": false, 00:42:14.258 "ddgst": false 00:42:14.258 }, 00:42:14.258 "method": "bdev_nvme_attach_controller" 00:42:14.258 }' 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:14.258 13:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:14.258 13:22:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:14.826 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:14.826 ... 00:42:14.826 fio-3.35 00:42:14.826 Starting 3 threads 00:42:21.394 00:42:21.394 filename0: (groupid=0, jobs=1): err= 0: pid=1304774: Sun Dec 15 13:22:28 2024 00:42:21.394 read: IOPS=326, BW=40.8MiB/s (42.7MB/s)(206MiB/5046msec) 00:42:21.394 slat (nsec): min=6343, max=36824, avg=11856.30, stdev=4809.01 00:42:21.394 clat (usec): min=3537, max=89328, avg=9161.17, stdev=5693.14 00:42:21.394 lat (usec): min=3543, max=89340, avg=9173.03, stdev=5693.84 00:42:21.394 clat percentiles (usec): 00:42:21.394 | 1.00th=[ 5145], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 7504], 00:42:21.394 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8848], 00:42:21.394 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10159], 95.00th=[10683], 00:42:21.394 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[89654], 00:42:21.394 | 99.99th=[89654] 00:42:21.394 bw ( KiB/s): min=33536, max=46336, per=35.42%, avg=42060.80, stdev=4722.03, samples=10 00:42:21.394 iops : min= 262, max= 362, avg=328.60, stdev=36.89, samples=10 00:42:21.394 lat (msec) : 4=0.24%, 10=87.72%, 20=10.33%, 50=1.09%, 100=0.61% 00:42:21.394 cpu : usr=94.35%, sys=5.35%, ctx=9, majf=0, minf=37 00:42:21.394 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:21.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.394 issued rwts: total=1645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:21.394 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:21.394 filename0: (groupid=0, jobs=1): err= 0: pid=1304775: Sun Dec 15 13:22:28 2024 00:42:21.394 read: IOPS=287, BW=35.9MiB/s 
(37.7MB/s)(181MiB/5044msec) 00:42:21.394 slat (nsec): min=6414, max=38443, avg=12465.22, stdev=5509.16 00:42:21.394 clat (usec): min=3399, max=89991, avg=10399.64, stdev=6303.64 00:42:21.394 lat (usec): min=3406, max=90002, avg=10412.10, stdev=6303.57 00:42:21.394 clat percentiles (usec): 00:42:21.394 | 1.00th=[ 3884], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 8225], 00:42:21.394 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10290], 00:42:21.394 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11863], 95.00th=[12518], 00:42:21.394 | 99.00th=[48497], 99.50th=[50070], 99.90th=[52691], 99.95th=[89654], 00:42:21.394 | 99.99th=[89654] 00:42:21.394 bw ( KiB/s): min=25344, max=45056, per=31.20%, avg=37043.20, stdev=5782.62, samples=10 00:42:21.394 iops : min= 198, max= 352, avg=289.40, stdev=45.18, samples=10 00:42:21.394 lat (msec) : 4=2.00%, 10=51.55%, 20=44.31%, 50=1.59%, 100=0.55% 00:42:21.394 cpu : usr=95.30%, sys=4.42%, ctx=11, majf=0, minf=43 00:42:21.394 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:21.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.394 issued rwts: total=1449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:21.394 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:21.394 filename0: (groupid=0, jobs=1): err= 0: pid=1304776: Sun Dec 15 13:22:28 2024 00:42:21.394 read: IOPS=317, BW=39.6MiB/s (41.6MB/s)(198MiB/5004msec) 00:42:21.394 slat (nsec): min=6263, max=62985, avg=12645.15, stdev=4626.22 00:42:21.394 clat (usec): min=3559, max=50202, avg=9442.26, stdev=4156.47 00:42:21.394 lat (usec): min=3568, max=50222, avg=9454.90, stdev=4156.52 00:42:21.394 clat percentiles (usec): 00:42:21.394 | 1.00th=[ 3621], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7767], 00:42:21.394 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:42:21.394 | 70.00th=[10028], 
80.00th=[10552], 90.00th=[11207], 95.00th=[11731], 00:42:21.394 | 99.00th=[13566], 99.50th=[49021], 99.90th=[49546], 99.95th=[50070], 00:42:21.394 | 99.99th=[50070] 00:42:21.394 bw ( KiB/s): min=34560, max=46848, per=34.47%, avg=40931.56, stdev=3887.59, samples=9 00:42:21.394 iops : min= 270, max= 366, avg=319.78, stdev=30.37, samples=9 00:42:21.394 lat (msec) : 4=1.70%, 10=68.87%, 20=28.48%, 50=0.88%, 100=0.06% 00:42:21.394 cpu : usr=94.84%, sys=4.86%, ctx=10, majf=0, minf=57 00:42:21.394 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:21.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:21.394 issued rwts: total=1587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:21.394 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:21.394 00:42:21.394 Run status group 0 (all jobs): 00:42:21.394 READ: bw=116MiB/s (122MB/s), 35.9MiB/s-40.8MiB/s (37.7MB/s-42.7MB/s), io=585MiB (614MB), run=5004-5046msec 00:42:21.394 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:21.394 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:21.394 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 bdev_null0 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 [2024-12-15 13:22:28.307940] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:42:21.395 bdev_null1 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 bdev_null2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:21.395 13:22:28 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:21.395 { 00:42:21.395 "params": { 00:42:21.395 "name": "Nvme$subsystem", 00:42:21.395 "trtype": "$TEST_TRANSPORT", 00:42:21.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:21.395 "adrfam": "ipv4", 00:42:21.395 "trsvcid": "$NVMF_PORT", 00:42:21.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:21.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:21.395 "hdgst": ${hdgst:-false}, 00:42:21.395 "ddgst": ${ddgst:-false} 00:42:21.395 }, 00:42:21.395 "method": "bdev_nvme_attach_controller" 00:42:21.395 } 00:42:21.395 EOF 00:42:21.395 )") 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:21.395 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:21.395 { 00:42:21.395 "params": { 00:42:21.395 "name": "Nvme$subsystem", 00:42:21.395 "trtype": "$TEST_TRANSPORT", 00:42:21.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:21.395 "adrfam": "ipv4", 00:42:21.395 "trsvcid": "$NVMF_PORT", 00:42:21.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:21.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:21.395 "hdgst": ${hdgst:-false}, 00:42:21.395 "ddgst": ${ddgst:-false} 00:42:21.395 }, 00:42:21.396 "method": "bdev_nvme_attach_controller" 00:42:21.396 } 00:42:21.396 EOF 00:42:21.396 )") 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:21.396 { 00:42:21.396 "params": { 00:42:21.396 "name": "Nvme$subsystem", 00:42:21.396 "trtype": "$TEST_TRANSPORT", 00:42:21.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:21.396 "adrfam": "ipv4", 00:42:21.396 "trsvcid": "$NVMF_PORT", 00:42:21.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:21.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:21.396 "hdgst": ${hdgst:-false}, 00:42:21.396 "ddgst": ${ddgst:-false} 00:42:21.396 }, 00:42:21.396 "method": "bdev_nvme_attach_controller" 00:42:21.396 } 00:42:21.396 EOF 00:42:21.396 )") 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:21.396 "params": { 00:42:21.396 "name": "Nvme0", 00:42:21.396 "trtype": "tcp", 00:42:21.396 "traddr": "10.0.0.2", 00:42:21.396 "adrfam": "ipv4", 00:42:21.396 "trsvcid": "4420", 00:42:21.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:21.396 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:21.396 "hdgst": false, 00:42:21.396 "ddgst": false 00:42:21.396 }, 00:42:21.396 "method": "bdev_nvme_attach_controller" 00:42:21.396 },{ 00:42:21.396 "params": { 00:42:21.396 "name": "Nvme1", 00:42:21.396 "trtype": "tcp", 00:42:21.396 "traddr": "10.0.0.2", 00:42:21.396 "adrfam": "ipv4", 00:42:21.396 "trsvcid": "4420", 00:42:21.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:21.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:21.396 "hdgst": false, 00:42:21.396 "ddgst": false 00:42:21.396 }, 00:42:21.396 "method": "bdev_nvme_attach_controller" 00:42:21.396 },{ 00:42:21.396 "params": { 00:42:21.396 "name": "Nvme2", 00:42:21.396 "trtype": "tcp", 00:42:21.396 "traddr": "10.0.0.2", 00:42:21.396 "adrfam": "ipv4", 00:42:21.396 "trsvcid": "4420", 00:42:21.396 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:21.396 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:21.396 "hdgst": false, 00:42:21.396 "ddgst": false 00:42:21.396 }, 00:42:21.396 "method": "bdev_nvme_attach_controller" 00:42:21.396 }' 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.396 13:22:28 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:21.396 13:22:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:21.396 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:21.396 ... 00:42:21.396 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:21.396 ... 00:42:21.396 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:21.396 ... 
00:42:21.396 fio-3.35 00:42:21.396 Starting 24 threads 00:42:33.599 00:42:33.599 filename0: (groupid=0, jobs=1): err= 0: pid=1305799: Sun Dec 15 13:22:39 2024 00:42:33.599 read: IOPS=547, BW=2191KiB/s (2243kB/s)(21.4MiB/10020msec) 00:42:33.599 slat (usec): min=7, max=107, avg=35.82, stdev=27.21 00:42:33.599 clat (usec): min=8888, max=35400, avg=28912.07, stdev=1986.92 00:42:33.599 lat (usec): min=8907, max=35419, avg=28947.88, stdev=1975.56 00:42:33.599 clat percentiles (usec): 00:42:33.599 | 1.00th=[17957], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:42:33.599 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28967], 00:42:33.599 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:42:33.599 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35390], 99.95th=[35390], 00:42:33.599 | 99.99th=[35390] 00:42:33.599 bw ( KiB/s): min= 2048, max= 2304, per=4.20%, avg=2188.80, stdev=91.93, samples=20 00:42:33.599 iops : min= 512, max= 576, avg=547.20, stdev=22.98, samples=20 00:42:33.599 lat (msec) : 10=0.04%, 20=1.13%, 50=98.83% 00:42:33.599 cpu : usr=98.69%, sys=0.91%, ctx=13, majf=0, minf=9 00:42:33.599 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:33.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:33.599 filename0: (groupid=0, jobs=1): err= 0: pid=1305800: Sun Dec 15 13:22:39 2024 00:42:33.599 read: IOPS=547, BW=2191KiB/s (2243kB/s)(21.4MiB/10020msec) 00:42:33.599 slat (usec): min=7, max=133, avg=34.62, stdev=15.55 00:42:33.599 clat (usec): min=4241, max=36333, avg=28934.86, stdev=1927.68 00:42:33.599 lat (usec): min=4250, max=36363, avg=28969.48, stdev=1924.90 00:42:33.599 clat percentiles (usec): 00:42:33.599 | 1.00th=[17957], 
5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:42:33.599 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:42:33.599 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:42:33.599 | 99.00th=[31065], 99.50th=[31065], 99.90th=[36439], 99.95th=[36439], 00:42:33.599 | 99.99th=[36439] 00:42:33.599 bw ( KiB/s): min= 2048, max= 2304, per=4.20%, avg=2188.80, stdev=91.93, samples=20 00:42:33.599 iops : min= 512, max= 576, avg=547.20, stdev=22.98, samples=20 00:42:33.599 lat (msec) : 10=0.04%, 20=1.09%, 50=98.87% 00:42:33.599 cpu : usr=98.65%, sys=0.99%, ctx=15, majf=0, minf=11 00:42:33.599 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:33.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:33.599 filename0: (groupid=0, jobs=1): err= 0: pid=1305801: Sun Dec 15 13:22:39 2024 00:42:33.599 read: IOPS=541, BW=2164KiB/s (2216kB/s)(21.2MiB/10024msec) 00:42:33.599 slat (nsec): min=7519, max=76560, avg=22901.80, stdev=10915.17 00:42:33.599 clat (usec): min=27607, max=71297, avg=29356.41, stdev=2973.59 00:42:33.599 lat (usec): min=27672, max=71313, avg=29379.31, stdev=2972.92 00:42:33.599 clat percentiles (usec): 00:42:33.599 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:42:33.599 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:42:33.599 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:42:33.599 | 99.00th=[31065], 99.50th=[59507], 99.90th=[70779], 99.95th=[70779], 00:42:33.599 | 99.99th=[70779] 00:42:33.599 bw ( KiB/s): min= 1920, max= 2304, per=4.16%, avg=2163.20, stdev=109.09, samples=20 00:42:33.599 iops : min= 480, max= 576, avg=540.80, stdev=27.27, samples=20 
00:42:33.599 lat (msec) : 50=99.41%, 100=0.59% 00:42:33.599 cpu : usr=98.12%, sys=1.21%, ctx=95, majf=0, minf=9 00:42:33.599 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:33.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:33.599 filename0: (groupid=0, jobs=1): err= 0: pid=1305802: Sun Dec 15 13:22:39 2024 00:42:33.599 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10034msec) 00:42:33.599 slat (nsec): min=4675, max=75977, avg=18232.87, stdev=11893.85 00:42:33.599 clat (usec): min=15854, max=70517, avg=29360.15, stdev=2548.76 00:42:33.599 lat (usec): min=15862, max=70543, avg=29378.38, stdev=2547.53 00:42:33.599 clat percentiles (usec): 00:42:33.599 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:42:33.599 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967], 00:42:33.599 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:42:33.599 | 99.00th=[31327], 99.50th=[39584], 99.90th=[70779], 99.95th=[70779], 00:42:33.599 | 99.99th=[70779] 00:42:33.599 bw ( KiB/s): min= 1920, max= 2304, per=4.16%, avg=2167.35, stdev=116.49, samples=20 00:42:33.599 iops : min= 480, max= 576, avg=541.80, stdev=29.18, samples=20 00:42:33.599 lat (msec) : 20=0.04%, 50=99.63%, 100=0.33% 00:42:33.599 cpu : usr=98.20%, sys=1.19%, ctx=111, majf=0, minf=9 00:42:33.599 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:33.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.599 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:42:33.599 filename0: (groupid=0, jobs=1): err= 0: pid=1305803: Sun Dec 15 13:22:39 2024 00:42:33.599 read: IOPS=540, BW=2162KiB/s (2214kB/s)(21.2MiB/10065msec) 00:42:33.599 slat (nsec): min=6857, max=99889, avg=37113.86, stdev=16657.13 00:42:33.599 clat (usec): min=19798, max=83539, avg=29259.35, stdev=3307.65 00:42:33.599 lat (usec): min=19805, max=83550, avg=29296.46, stdev=3304.39 00:42:33.599 clat percentiles (usec): 00:42:33.599 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:42:33.599 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:42:33.599 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:42:33.599 | 99.00th=[31327], 99.50th=[50070], 99.90th=[82314], 99.95th=[82314], 00:42:33.599 | 99.99th=[83362] 00:42:33.599 bw ( KiB/s): min= 1916, max= 2304, per=4.17%, avg=2169.40, stdev=106.17, samples=20 00:42:33.599 iops : min= 479, max= 576, avg=542.35, stdev=26.54, samples=20 00:42:33.599 lat (msec) : 20=0.04%, 50=99.43%, 100=0.53% 00:42:33.599 cpu : usr=98.75%, sys=0.90%, ctx=10, majf=0, minf=9 00:42:33.599 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:33.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:33.599 filename0: (groupid=0, jobs=1): err= 0: pid=1305804: Sun Dec 15 13:22:39 2024 00:42:33.599 read: IOPS=540, BW=2163KiB/s (2214kB/s)(21.2MiB/10062msec) 00:42:33.599 slat (nsec): min=4678, max=91015, avg=33136.46, stdev=20273.44 00:42:33.599 clat (usec): min=25107, max=93899, avg=29249.50, stdev=3752.40 00:42:33.599 lat (usec): min=25128, max=93936, avg=29282.64, stdev=3751.42 00:42:33.599 clat percentiles (usec): 00:42:33.599 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 
20.00th=[28181], 00:42:33.599 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:42:33.599 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:42:33.599 | 99.00th=[31065], 99.50th=[44827], 99.90th=[93848], 99.95th=[93848], 00:42:33.599 | 99.99th=[93848] 00:42:33.599 bw ( KiB/s): min= 1923, max= 2304, per=4.17%, avg=2169.30, stdev=105.29, samples=20 00:42:33.599 iops : min= 480, max= 576, avg=542.25, stdev=26.42, samples=20 00:42:33.599 lat (msec) : 50=99.67%, 100=0.33% 00:42:33.599 cpu : usr=98.71%, sys=0.90%, ctx=13, majf=0, minf=9 00:42:33.599 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:33.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.599 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:33.599 filename0: (groupid=0, jobs=1): err= 0: pid=1305805: Sun Dec 15 13:22:39 2024 00:42:33.599 read: IOPS=547, BW=2191KiB/s (2243kB/s)(21.4MiB/10021msec) 00:42:33.599 slat (nsec): min=8509, max=82988, avg=37149.83, stdev=15940.82 00:42:33.599 clat (usec): min=8910, max=36354, avg=28909.32, stdev=1929.58 00:42:33.599 lat (usec): min=8921, max=36397, avg=28946.47, stdev=1925.74 00:42:33.599 clat percentiles (usec): 00:42:33.599 | 1.00th=[18220], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:42:33.599 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:42:33.599 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:42:33.599 | 99.00th=[31065], 99.50th=[31065], 99.90th=[35914], 99.95th=[36439], 00:42:33.599 | 99.99th=[36439] 00:42:33.599 bw ( KiB/s): min= 2048, max= 2304, per=4.20%, avg=2188.80, stdev=91.93, samples=20 00:42:33.599 iops : min= 512, max= 576, avg=547.20, stdev=22.98, samples=20 00:42:33.599 lat (msec) : 10=0.04%, 20=1.09%, 
50=98.87%
00:42:33.599 cpu : usr=98.69%, sys=0.91%, ctx=41, majf=0, minf=9
00:42:33.599 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0%
00:42:33.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.599 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.599 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.599 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.599 filename0: (groupid=0, jobs=1): err= 0: pid=1305806: Sun Dec 15 13:22:39 2024
00:42:33.599 read: IOPS=542, BW=2171KiB/s (2223kB/s)(21.4MiB/10081msec)
00:42:33.599 slat (usec): min=6, max=107, avg=44.15, stdev=22.68
00:42:33.599 clat (usec): min=15090, max=82329, avg=29061.31, stdev=3196.19
00:42:33.599 lat (usec): min=15115, max=82363, avg=29105.46, stdev=3192.75
00:42:33.599 clat percentiles (usec):
00:42:33.599 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919],
00:42:33.599 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705],
00:42:33.599 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.599 | 99.00th=[31065], 99.50th=[35914], 99.90th=[82314], 99.95th=[82314],
00:42:33.599 | 99.99th=[82314]
00:42:33.599 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2178.60, stdev=98.91, samples=20
00:42:33.599 iops : min= 512, max= 576, avg=544.65, stdev=24.73, samples=20
00:42:33.599 lat (msec) : 20=0.55%, 50=99.16%, 100=0.29%
00:42:33.599 cpu : usr=98.73%, sys=0.88%, ctx=13, majf=0, minf=9
00:42:33.599 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:42:33.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.599 issued rwts: total=5472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.599 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.599 filename1: (groupid=0, jobs=1): err= 0: pid=1305807: Sun Dec 15 13:22:39 2024
00:42:33.599 read: IOPS=541, BW=2165KiB/s (2217kB/s)(21.3MiB/10079msec)
00:42:33.599 slat (usec): min=6, max=107, avg=42.32, stdev=23.54
00:42:33.599 clat (usec): min=27448, max=83313, avg=29128.66, stdev=3114.19
00:42:33.599 lat (usec): min=27478, max=83327, avg=29170.97, stdev=3110.65
00:42:33.599 clat percentiles (usec):
00:42:33.599 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919],
00:42:33.599 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705],
00:42:33.599 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.599 | 99.00th=[31327], 99.50th=[35914], 99.90th=[82314], 99.95th=[82314],
00:42:33.599 | 99.99th=[83362]
00:42:33.599 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2173.00, stdev=102.61, samples=20
00:42:33.599 iops : min= 512, max= 576, avg=543.25, stdev=25.65, samples=20
00:42:33.599 lat (msec) : 50=99.71%, 100=0.29%
00:42:33.599 cpu : usr=98.57%, sys=1.03%, ctx=12, majf=0, minf=9
00:42:33.599 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.599 issued rwts: total=5456,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.599 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.600 filename1: (groupid=0, jobs=1): err= 0: pid=1305808: Sun Dec 15 13:22:39 2024
00:42:33.600 read: IOPS=540, BW=2163KiB/s (2215kB/s)(21.2MiB/10061msec)
00:42:33.600 slat (usec): min=8, max=135, avg=25.73, stdev=12.76
00:42:33.600 clat (usec): min=26856, max=94531, avg=29365.14, stdev=3758.13
00:42:33.600 lat (usec): min=26923, max=94557, avg=29390.87, stdev=3756.87
00:42:33.600 clat percentiles (usec):
00:42:33.600 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181],
00:42:33.600 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705],
00:42:33.600 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.600 | 99.00th=[31065], 99.50th=[44303], 99.90th=[94897], 99.95th=[94897],
00:42:33.600 | 99.99th=[94897]
00:42:33.600 bw ( KiB/s): min= 1920, max= 2304, per=4.17%, avg=2169.60, stdev=105.67, samples=20
00:42:33.600 iops : min= 480, max= 576, avg=542.40, stdev=26.42, samples=20
00:42:33.600 lat (msec) : 50=99.71%, 100=0.29%
00:42:33.600 cpu : usr=98.29%, sys=1.06%, ctx=111, majf=0, minf=9
00:42:33.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.600 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.600 filename1: (groupid=0, jobs=1): err= 0: pid=1305809: Sun Dec 15 13:22:39 2024
00:42:33.600 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10034msec)
00:42:33.600 slat (nsec): min=4395, max=82990, avg=27051.34, stdev=17633.78
00:42:33.600 clat (usec): min=14609, max=70794, avg=29238.36, stdev=3032.22
00:42:33.600 lat (usec): min=14627, max=70824, avg=29265.41, stdev=3030.29
00:42:33.600 clat percentiles (usec):
00:42:33.600 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181],
00:42:33.600 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705],
00:42:33.600 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.600 | 99.00th=[39584], 99.50th=[45351], 99.90th=[70779], 99.95th=[70779],
00:42:33.600 | 99.99th=[70779]
00:42:33.600 bw ( KiB/s): min= 1920, max= 2304, per=4.16%, avg=2167.35, stdev=115.67, samples=20
00:42:33.600 iops : min= 480, max= 576, avg=541.80, stdev=28.97, samples=20
00:42:33.600 lat (msec) : 20=0.66%, 50=99.04%, 100=0.29%
00:42:33.600 cpu : usr=98.56%, sys=1.05%, ctx=13, majf=0, minf=9
00:42:33.600 IO depths : 1=5.8%, 2=11.9%, 4=24.7%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0%
00:42:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.600 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.600 filename1: (groupid=0, jobs=1): err= 0: pid=1305810: Sun Dec 15 13:22:39 2024
00:42:33.600 read: IOPS=547, BW=2191KiB/s (2243kB/s)(21.4MiB/10020msec)
00:42:33.600 slat (usec): min=7, max=104, avg=30.39, stdev=22.63
00:42:33.600 clat (usec): min=10222, max=35863, avg=28984.85, stdev=1942.34
00:42:33.600 lat (usec): min=10242, max=35878, avg=29015.25, stdev=1935.69
00:42:33.600 clat percentiles (usec):
00:42:33.600 | 1.00th=[17957], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181],
00:42:33.600 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28705], 60.00th=[28967],
00:42:33.600 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.600 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35914], 99.95th=[35914],
00:42:33.600 | 99.99th=[35914]
00:42:33.600 bw ( KiB/s): min= 2048, max= 2304, per=4.20%, avg=2188.80, stdev=91.93, samples=20
00:42:33.600 iops : min= 512, max= 576, avg=547.20, stdev=22.98, samples=20
00:42:33.600 lat (msec) : 20=1.17%, 50=98.83%
00:42:33.600 cpu : usr=98.64%, sys=0.96%, ctx=16, majf=0, minf=9
00:42:33.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.600 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.600 filename1: (groupid=0, jobs=1): err= 0: pid=1305811: Sun Dec 15 13:22:39 2024
00:42:33.600 read: IOPS=540, BW=2162KiB/s (2214kB/s)(21.2MiB/10064msec)
00:42:33.600 slat (usec): min=7, max=107, avg=41.22, stdev=23.86
00:42:33.600 clat (usec): min=27502, max=82312, avg=29180.94, stdev=3302.15
00:42:33.600 lat (usec): min=27525, max=82358, avg=29222.16, stdev=3298.37
00:42:33.600 clat percentiles (usec):
00:42:33.600 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919],
00:42:33.600 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705],
00:42:33.600 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.600 | 99.00th=[31327], 99.50th=[50070], 99.90th=[82314], 99.95th=[82314],
00:42:33.600 | 99.99th=[82314]
00:42:33.600 bw ( KiB/s): min= 1920, max= 2304, per=4.17%, avg=2169.60, stdev=105.67, samples=20
00:42:33.600 iops : min= 480, max= 576, avg=542.40, stdev=26.42, samples=20
00:42:33.600 lat (msec) : 50=99.50%, 100=0.50%
00:42:33.600 cpu : usr=98.80%, sys=0.81%, ctx=13, majf=0, minf=9
00:42:33.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.600 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.600 filename1: (groupid=0, jobs=1): err= 0: pid=1305812: Sun Dec 15 13:22:39 2024
00:42:33.600 read: IOPS=545, BW=2182KiB/s (2234kB/s)(21.3MiB/10003msec)
00:42:33.600 slat (usec): min=5, max=107, avg=45.68, stdev=22.41
00:42:33.600 clat (usec): min=17781, max=39188, avg=28923.89, stdev=1390.66
00:42:33.600 lat (usec): min=17796, max=39211, avg=28969.57, stdev=1381.67
00:42:33.600 clat percentiles (usec):
00:42:33.600 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919],
00:42:33.600 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705],
00:42:33.600 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.600 | 99.00th=[31065], 99.50th=[31065], 99.90th=[36439], 99.95th=[36439],
00:42:33.600 | 99.99th=[39060]
00:42:33.600 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2176.00, stdev=95.41, samples=19
00:42:33.600 iops : min= 512, max= 576, avg=544.00, stdev=23.85, samples=19
00:42:33.600 lat (msec) : 20=0.29%, 50=99.71%
00:42:33.600 cpu : usr=98.59%, sys=1.02%, ctx=13, majf=0, minf=9
00:42:33.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:42:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 issued rwts: total=5456,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.600 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.600 filename1: (groupid=0, jobs=1): err= 0: pid=1305813: Sun Dec 15 13:22:39 2024
00:42:33.600 read: IOPS=548, BW=2194KiB/s (2247kB/s)(21.4MiB/10006msec)
00:42:33.600 slat (nsec): min=7227, max=90984, avg=28067.63, stdev=19647.77
00:42:33.600 clat (usec): min=10579, max=31311, avg=28903.32, stdev=1970.11
00:42:33.600 lat (usec): min=10591, max=31331, avg=28931.39, stdev=1965.30
00:42:33.600 clat percentiles (usec):
00:42:33.600 | 1.00th=[16057], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181],
00:42:33.600 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705],
00:42:33.600 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.600 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:42:33.600 | 99.99th=[31327]
00:42:33.600 bw ( KiB/s): min= 2048, max= 2304, per=4.20%, avg=2188.80, stdev=91.93, samples=20
00:42:33.600 iops : min= 512, max= 576, avg=547.20, stdev=22.98, samples=20
00:42:33.600 lat (msec) : 20=1.17%, 50=98.83%
00:42:33.600 cpu : usr=98.84%, sys=0.75%, ctx=12, majf=0, minf=9
00:42:33.600 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:42:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.600 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.600 filename1: (groupid=0, jobs=1): err= 0: pid=1305814: Sun Dec 15 13:22:39 2024
00:42:33.600 read: IOPS=540, BW=2163KiB/s (2214kB/s)(21.2MiB/10062msec)
00:42:33.600 slat (nsec): min=4196, max=90631, avg=32187.43, stdev=20140.69
00:42:33.600 clat (usec): min=27791, max=94507, avg=29252.71, stdev=3771.38
00:42:33.600 lat (usec): min=27807, max=94566, avg=29284.90, stdev=3770.54
00:42:33.600 clat percentiles (usec):
00:42:33.600 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181],
00:42:33.600 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705],
00:42:33.600 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540],
00:42:33.600 | 99.00th=[31065], 99.50th=[44827], 99.90th=[94897], 99.95th=[94897],
00:42:33.600 | 99.99th=[94897]
00:42:33.600 bw ( KiB/s): min= 1923, max= 2304, per=4.17%, avg=2169.30, stdev=105.29, samples=20
00:42:33.600 iops : min= 480, max= 576, avg=542.25, stdev=26.42, samples=20
00:42:33.600 lat (msec) : 50=99.71%, 100=0.29%
00:42:33.600 cpu : usr=98.66%, sys=0.93%, ctx=12, majf=0, minf=9
00:42:33.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.600 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.600 filename2: (groupid=0, jobs=1): err= 0: pid=1305815: Sun Dec 15 13:22:39 2024
00:42:33.600 read: IOPS=540, BW=2162KiB/s (2214kB/s)(21.2MiB/10065msec)
00:42:33.600 slat (nsec): min=4164, max=90984, avg=33796.65, stdev=20018.97
00:42:33.600 clat (usec): min=27764, max=96106, avg=29252.55, stdev=3767.99
00:42:33.600 lat (usec): min=27773, max=96119, avg=29286.35, stdev=3766.99
00:42:33.600 clat percentiles (usec):
00:42:33.600 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181],
00:42:33.600 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705],
00:42:33.600 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540],
00:42:33.600 | 99.00th=[31065], 99.50th=[45351], 99.90th=[93848], 99.95th=[93848],
00:42:33.600 | 99.99th=[95945]
00:42:33.600 bw ( KiB/s): min= 1920, max= 2304, per=4.17%, avg=2169.15, stdev=105.66, samples=20
00:42:33.600 iops : min= 480, max= 576, avg=542.25, stdev=26.42, samples=20
00:42:33.600 lat (msec) : 50=99.71%, 100=0.29%
00:42:33.600 cpu : usr=98.61%, sys=0.98%, ctx=18, majf=0, minf=9
00:42:33.600 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.601 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.601 filename2: (groupid=0, jobs=1): err= 0: pid=1305816: Sun Dec 15 13:22:39 2024
00:42:33.601 read: IOPS=540, BW=2163KiB/s (2214kB/s)(21.2MiB/10062msec)
00:42:33.601 slat (nsec): min=4517, max=90467, avg=32518.94, stdev=20047.88
00:42:33.601 clat (usec): min=27787, max=94460, avg=29253.10, stdev=3772.22
00:42:33.601 lat (usec): min=27803, max=94519, avg=29285.62, stdev=3771.38
00:42:33.601 clat percentiles (usec):
00:42:33.601 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181],
00:42:33.601 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705],
00:42:33.601 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540],
00:42:33.601 | 99.00th=[31065], 99.50th=[45351], 99.90th=[93848], 99.95th=[94897],
00:42:33.601 | 99.99th=[94897]
00:42:33.601 bw ( KiB/s): min= 1923, max= 2304, per=4.17%, avg=2169.30, stdev=105.29, samples=20
00:42:33.601 iops : min= 480, max= 576, avg=542.25, stdev=26.42, samples=20
00:42:33.601 lat (msec) : 50=99.71%, 100=0.29%
00:42:33.601 cpu : usr=98.74%, sys=0.85%, ctx=14, majf=0, minf=9
00:42:33.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.601 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.601 filename2: (groupid=0, jobs=1): err= 0: pid=1305817: Sun Dec 15 13:22:39 2024
00:42:33.601 read: IOPS=542, BW=2171KiB/s (2223kB/s)(21.4MiB/10081msec)
00:42:33.601 slat (usec): min=5, max=107, avg=43.76, stdev=22.88
00:42:33.601 clat (usec): min=17783, max=82038, avg=29060.00, stdev=3182.76
00:42:33.601 lat (usec): min=17806, max=82091, avg=29103.76, stdev=3179.40
00:42:33.601 clat percentiles (usec):
00:42:33.601 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919],
00:42:33.601 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705],
00:42:33.601 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.601 | 99.00th=[31065], 99.50th=[35914], 99.90th=[81265], 99.95th=[82314],
00:42:33.601 | 99.99th=[82314]
00:42:33.601 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2178.60, stdev=98.91, samples=20
00:42:33.601 iops : min= 512, max= 576, avg=544.65, stdev=24.73, samples=20
00:42:33.601 lat (msec) : 20=0.58%, 50=99.12%, 100=0.29%
00:42:33.601 cpu : usr=98.63%, sys=0.96%, ctx=14, majf=0, minf=9
00:42:33.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 issued rwts: total=5472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.601 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.601 filename2: (groupid=0, jobs=1): err= 0: pid=1305818: Sun Dec 15 13:22:39 2024
00:42:33.601 read: IOPS=542, BW=2171KiB/s (2224kB/s)(21.4MiB/10080msec)
00:42:33.601 slat (usec): min=6, max=115, avg=43.96, stdev=23.04
00:42:33.601 clat (usec): min=17821, max=82222, avg=29051.76, stdev=3202.26
00:42:33.601 lat (usec): min=17842, max=82255, avg=29095.72, stdev=3199.22
00:42:33.601 clat percentiles (usec):
00:42:33.601 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919],
00:42:33.601 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705],
00:42:33.601 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.601 | 99.00th=[31065], 99.50th=[35914], 99.90th=[82314], 99.95th=[82314],
00:42:33.601 | 99.99th=[82314]
00:42:33.601 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2178.80, stdev=98.90, samples=20
00:42:33.601 iops : min= 512, max= 576, avg=544.70, stdev=24.73, samples=20
00:42:33.601 lat (msec) : 20=0.58%, 50=99.12%, 100=0.29%
00:42:33.601 cpu : usr=98.81%, sys=0.76%, ctx=61, majf=0, minf=9
00:42:33.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 issued rwts: total=5472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.601 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.601 filename2: (groupid=0, jobs=1): err= 0: pid=1305819: Sun Dec 15 13:22:39 2024
00:42:33.601 read: IOPS=546, BW=2187KiB/s (2240kB/s)(21.4MiB/10028msec)
00:42:33.601 slat (nsec): min=4256, max=83093, avg=26441.50, stdev=17748.95
00:42:33.601 clat (usec): min=15108, max=71209, avg=28990.08, stdev=3188.98
00:42:33.601 lat (usec): min=15120, max=71239, avg=29016.53, stdev=3188.33
00:42:33.601 clat percentiles (usec):
00:42:33.601 | 1.00th=[19006], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181],
00:42:33.601 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705],
00:42:33.601 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.601 | 99.00th=[33817], 99.50th=[48497], 99.90th=[70779], 99.95th=[70779],
00:42:33.601 | 99.99th=[70779]
00:42:33.601 bw ( KiB/s): min= 2031, max= 2315, per=4.20%, avg=2186.10, stdev=103.73, samples=20
00:42:33.601 iops : min= 507, max= 578, avg=546.45, stdev=25.94, samples=20
00:42:33.601 lat (msec) : 20=1.64%, 50=98.03%, 100=0.33%
00:42:33.601 cpu : usr=98.59%, sys=1.01%, ctx=14, majf=0, minf=9
00:42:33.601 IO depths : 1=5.9%, 2=11.9%, 4=24.1%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0%
00:42:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 issued rwts: total=5484,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.601 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.601 filename2: (groupid=0, jobs=1): err= 0: pid=1305820: Sun Dec 15 13:22:39 2024
00:42:33.601 read: IOPS=547, BW=2191KiB/s (2243kB/s)(21.4MiB/10021msec)
00:42:33.601 slat (usec): min=8, max=107, avg=45.44, stdev=22.99
00:42:33.601 clat (usec): min=10281, max=36374, avg=28821.35, stdev=1975.97
00:42:33.601 lat (usec): min=10305, max=36422, avg=28866.79, stdev=1968.99
00:42:33.601 clat percentiles (usec):
00:42:33.601 | 1.00th=[17957], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919],
00:42:33.601 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705],
00:42:33.601 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.601 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35914], 99.95th=[36439],
00:42:33.601 | 99.99th=[36439]
00:42:33.601 bw ( KiB/s): min= 2048, max= 2304, per=4.20%, avg=2188.80, stdev=91.93, samples=20
00:42:33.601 iops : min= 512, max= 576, avg=547.20, stdev=22.98, samples=20
00:42:33.601 lat (msec) : 20=1.17%, 50=98.83%
00:42:33.601 cpu : usr=98.71%, sys=0.87%, ctx=12, majf=0, minf=9
00:42:33.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:42:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:33.601 latency : target=0, window=0, percentile=100.00%, depth=16
00:42:33.601 filename2: (groupid=0, jobs=1): err= 0: pid=1305821: Sun Dec 15 13:22:39 2024
00:42:33.601 read: IOPS=561, BW=2248KiB/s (2302kB/s)(22.0MiB/10023msec)
00:42:33.601 slat (usec): min=7, max=198, avg=14.04, stdev= 8.22
00:42:33.601 clat (usec): min=1285, max=31413, avg=28351.66, stdev=4689.15
00:42:33.601 lat (usec): min=1299, max=31429, avg=28365.70, stdev=4688.78
00:42:33.601 clat percentiles (usec):
00:42:33.601 | 1.00th=[ 1631], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443],
00:42:33.601 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28705], 60.00th=[28705],
00:42:33.601 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540],
00:42:33.601 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:42:33.601 | 99.99th=[31327]
00:42:33.601 bw ( KiB/s): min= 2048, max= 3328, per=4.32%, avg=2246.40, stdev=270.65, samples=20
00:42:33.601 iops : min= 512, max= 832, avg=561.60, stdev=67.66, samples=20
00:42:33.601 lat (msec) : 2=1.95%, 4=0.60%, 20=1.14%, 50=96.31%
00:42:33.601 cpu : usr=97.33%, sys=1.64%, ctx=496, majf=0, minf=9
00:42:33.601 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0%
00:42:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:33.601 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:33.601 filename2: (groupid=0, jobs=1): err= 0: pid=1305822: Sun Dec 15 13:22:39 2024 00:42:33.601 read: IOPS=541, BW=2164KiB/s (2216kB/s)(21.2MiB/10024msec) 00:42:33.601 slat (nsec): min=7201, max=82679, avg=27805.76, stdev=17286.14 00:42:33.601 clat (usec): min=15598, max=74609, avg=29299.96, stdev=3047.02 00:42:33.601 lat (usec): min=15659, max=74646, avg=29327.77, stdev=3045.05 00:42:33.601 clat percentiles (usec): 00:42:33.601 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:42:33.601 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28967], 00:42:33.601 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:42:33.601 | 99.00th=[31065], 99.50th=[59507], 99.90th=[70779], 99.95th=[70779], 00:42:33.601 | 99.99th=[74974] 00:42:33.601 bw ( KiB/s): min= 1920, max= 2304, per=4.16%, avg=2163.20, stdev=109.09, samples=20 00:42:33.601 iops : min= 480, max= 576, avg=540.80, stdev=27.27, samples=20 00:42:33.601 lat (msec) : 20=0.04%, 50=99.37%, 100=0.59% 00:42:33.601 cpu : usr=98.71%, sys=0.88%, ctx=11, majf=0, minf=9 00:42:33.601 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:33.601 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:33.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:33.601 00:42:33.601 Run status group 0 (all jobs): 00:42:33.601 READ: bw=50.8MiB/s (53.3MB/s), 2162KiB/s-2248KiB/s (2214kB/s-2302kB/s), io=512MiB (537MB), run=10003-10081msec 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local 
sub 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.601 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:33.602 13:22:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 bdev_null0 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:33.602 13:22:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 [2024-12-15 13:22:40.042919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 bdev_null1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 
13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:33.602 { 00:42:33.602 "params": { 00:42:33.602 "name": "Nvme$subsystem", 00:42:33.602 "trtype": "$TEST_TRANSPORT", 00:42:33.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:33.602 "adrfam": "ipv4", 00:42:33.602 "trsvcid": "$NVMF_PORT", 00:42:33.602 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:42:33.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:33.602 "hdgst": ${hdgst:-false}, 00:42:33.602 "ddgst": ${ddgst:-false} 00:42:33.602 }, 00:42:33.602 "method": "bdev_nvme_attach_controller" 00:42:33.602 } 00:42:33.602 EOF 00:42:33.602 )") 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:33.602 { 00:42:33.602 "params": { 00:42:33.602 "name": "Nvme$subsystem", 00:42:33.602 "trtype": "$TEST_TRANSPORT", 00:42:33.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:33.602 "adrfam": "ipv4", 00:42:33.602 "trsvcid": "$NVMF_PORT", 00:42:33.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:33.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:33.602 "hdgst": ${hdgst:-false}, 00:42:33.602 "ddgst": ${ddgst:-false} 00:42:33.602 }, 00:42:33.602 "method": "bdev_nvme_attach_controller" 00:42:33.602 } 00:42:33.602 EOF 00:42:33.602 )") 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
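The interleaved `ldd`/`grep`/`awk` fragments in the trace above are the harness probing the fio plugin for a linked sanitizer runtime so that it can be `LD_PRELOAD`ed ahead of the plugin. A stand-alone sketch of that probe, run here against a simulated `ldd` line instead of the real `spdk_bdev` binary:

```shell
# Simulated ldd output; the real harness runs `ldd` on
# spdk/build/fio/spdk_bdev and scans it the same way.
ldd_out='libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f1234560000)'
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  # Column 3 of an ldd line is the resolved library path.
  hit=$(printf '%s\n' "$ldd_out" | grep "$sanitizer" | awk '{print $3}')
  if [ -n "$hit" ]; then
    asan_lib=$hit
    break
  fi
done
echo "${asan_lib:-<none>}"  # /usr/lib64/libasan.so.8
```

In this log both probes come back empty (`asan_lib=` followed by a failing `[[ -n '' ]]`), meaning the plugin was built without sanitizers, so the eventual `LD_PRELOAD` contains only the plugin path.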
00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:33.602 13:22:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:33.602 "params": { 00:42:33.602 "name": "Nvme0", 00:42:33.602 "trtype": "tcp", 00:42:33.602 "traddr": "10.0.0.2", 00:42:33.602 "adrfam": "ipv4", 00:42:33.602 "trsvcid": "4420", 00:42:33.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:33.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:33.603 "hdgst": false, 00:42:33.603 "ddgst": false 00:42:33.603 }, 00:42:33.603 "method": "bdev_nvme_attach_controller" 00:42:33.603 },{ 00:42:33.603 "params": { 00:42:33.603 "name": "Nvme1", 00:42:33.603 "trtype": "tcp", 00:42:33.603 "traddr": "10.0.0.2", 00:42:33.603 "adrfam": "ipv4", 00:42:33.603 "trsvcid": "4420", 00:42:33.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:33.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:33.603 "hdgst": false, 00:42:33.603 "ddgst": false 00:42:33.603 }, 00:42:33.603 "method": "bdev_nvme_attach_controller" 00:42:33.603 }' 00:42:33.603 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:33.603 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:33.603 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.603 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.603 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:33.603 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:33.603 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:33.603 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:33.603 13:22:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:33.603 13:22:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.603 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:33.603 ... 00:42:33.603 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:33.603 ... 00:42:33.603 fio-3.35 00:42:33.603 Starting 4 threads 00:42:38.872 00:42:38.872 filename0: (groupid=0, jobs=1): err= 0: pid=1307731: Sun Dec 15 13:22:46 2024 00:42:38.872 read: IOPS=2770, BW=21.6MiB/s (22.7MB/s)(108MiB/5003msec) 00:42:38.872 slat (nsec): min=6098, max=56702, avg=11446.72, stdev=6807.76 00:42:38.872 clat (usec): min=657, max=5559, avg=2852.08, stdev=426.97 00:42:38.872 lat (usec): min=668, max=5572, avg=2863.52, stdev=427.33 00:42:38.872 clat percentiles (usec): 00:42:38.872 | 1.00th=[ 1614], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2540], 00:42:38.872 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2966], 00:42:38.872 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3294], 95.00th=[ 3425], 00:42:38.872 | 99.00th=[ 3982], 99.50th=[ 4293], 99.90th=[ 5014], 99.95th=[ 5276], 00:42:38.872 | 99.99th=[ 5538] 00:42:38.872 bw ( KiB/s): min=20592, max=23104, per=26.36%, avg=22129.78, stdev=892.62, samples=9 00:42:38.872 iops : min= 2574, max= 2888, avg=2766.22, stdev=111.58, samples=9 00:42:38.872 lat (usec) : 750=0.01%, 1000=0.29% 00:42:38.872 lat (msec) : 2=2.40%, 4=96.34%, 10=0.96% 00:42:38.872 cpu : usr=97.02%, sys=2.66%, ctx=10, majf=0, minf=0 00:42:38.872 IO depths : 1=0.3%, 2=11.1%, 4=60.0%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:38.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.872 
complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.872 issued rwts: total=13859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:38.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:38.872 filename0: (groupid=0, jobs=1): err= 0: pid=1307732: Sun Dec 15 13:22:46 2024 00:42:38.872 read: IOPS=2578, BW=20.1MiB/s (21.1MB/s)(102MiB/5042msec) 00:42:38.872 slat (nsec): min=6107, max=61018, avg=12545.58, stdev=7780.96 00:42:38.872 clat (usec): min=485, max=41620, avg=3047.10, stdev=763.93 00:42:38.872 lat (usec): min=492, max=41634, avg=3059.65, stdev=763.81 00:42:38.872 clat percentiles (usec): 00:42:38.872 | 1.00th=[ 1827], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2737], 00:42:38.872 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:42:38.872 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3589], 95.00th=[ 3949], 00:42:38.872 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5735], 99.95th=[ 5932], 00:42:38.872 | 99.99th=[41681] 00:42:38.872 bw ( KiB/s): min=19744, max=21504, per=24.78%, avg=20800.00, stdev=617.84, samples=10 00:42:38.872 iops : min= 2468, max= 2688, avg=2600.00, stdev=77.23, samples=10 00:42:38.872 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.02% 00:42:38.872 lat (msec) : 2=1.51%, 4=94.01%, 10=4.41%, 50=0.02% 00:42:38.872 cpu : usr=97.16%, sys=2.52%, ctx=9, majf=0, minf=0 00:42:38.872 IO depths : 1=0.2%, 2=9.1%, 4=62.6%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:38.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.872 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.872 issued rwts: total=13003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:38.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:38.872 filename1: (groupid=0, jobs=1): err= 0: pid=1307733: Sun Dec 15 13:22:46 2024 00:42:38.872 read: IOPS=2599, BW=20.3MiB/s (21.3MB/s)(102MiB/5001msec) 00:42:38.872 slat (nsec): min=6115, max=64039, avg=13154.16, 
stdev=8036.50 00:42:38.872 clat (usec): min=707, max=5874, avg=3037.61, stdev=474.29 00:42:38.872 lat (usec): min=718, max=5883, avg=3050.77, stdev=474.25 00:42:38.872 clat percentiles (usec): 00:42:38.872 | 1.00th=[ 1860], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2737], 00:42:38.872 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:42:38.872 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3556], 95.00th=[ 3916], 00:42:38.872 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 5407], 00:42:38.872 | 99.99th=[ 5538] 00:42:38.872 bw ( KiB/s): min=19703, max=21440, per=24.72%, avg=20751.00, stdev=597.90, samples=9 00:42:38.872 iops : min= 2462, max= 2680, avg=2593.78, stdev=74.93, samples=9 00:42:38.872 lat (usec) : 750=0.02%, 1000=0.02% 00:42:38.872 lat (msec) : 2=1.45%, 4=94.42%, 10=4.10% 00:42:38.872 cpu : usr=92.58%, sys=5.12%, ctx=334, majf=0, minf=9 00:42:38.872 IO depths : 1=0.4%, 2=6.6%, 4=64.5%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:38.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.872 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.872 issued rwts: total=13002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:38.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:38.872 filename1: (groupid=0, jobs=1): err= 0: pid=1307734: Sun Dec 15 13:22:46 2024 00:42:38.872 read: IOPS=2608, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:42:38.872 slat (nsec): min=6138, max=61323, avg=16259.39, stdev=10806.56 00:42:38.872 clat (usec): min=617, max=6060, avg=3015.36, stdev=463.69 00:42:38.872 lat (usec): min=645, max=6072, avg=3031.62, stdev=464.09 00:42:38.872 clat percentiles (usec): 00:42:38.872 | 1.00th=[ 1926], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2737], 00:42:38.872 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3064], 00:42:38.872 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 3818], 00:42:38.872 | 99.00th=[ 4621], 
99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5473], 00:42:38.872 | 99.99th=[ 5997] 00:42:38.872 bw ( KiB/s): min=20400, max=21584, per=24.93%, avg=20926.22, stdev=413.80, samples=9 00:42:38.872 iops : min= 2550, max= 2698, avg=2615.78, stdev=51.72, samples=9 00:42:38.872 lat (usec) : 750=0.05%, 1000=0.14% 00:42:38.872 lat (msec) : 2=0.95%, 4=95.47%, 10=3.40% 00:42:38.872 cpu : usr=97.56%, sys=2.08%, ctx=16, majf=0, minf=9 00:42:38.872 IO depths : 1=0.2%, 2=9.9%, 4=61.1%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:38.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.872 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:38.872 issued rwts: total=13046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:38.872 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:38.872 00:42:38.872 Run status group 0 (all jobs): 00:42:38.872 READ: bw=82.0MiB/s (86.0MB/s), 20.1MiB/s-21.6MiB/s (21.1MB/s-22.7MB/s), io=413MiB (433MB), run=5001-5042msec 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.872 00:42:38.872 real 0m24.577s 00:42:38.872 user 4m53.275s 00:42:38.872 sys 0m4.756s 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 ************************************ 00:42:38.872 END TEST fio_dif_rand_params 00:42:38.872 ************************************ 00:42:38.872 13:22:46 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 
00:42:38.872 13:22:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:38.872 13:22:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 ************************************ 00:42:38.872 START TEST fio_dif_digest 00:42:38.872 ************************************ 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 bdev_null0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:38.872 [2024-12-15 13:22:46.715409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:38.872 { 00:42:38.872 "params": { 00:42:38.872 "name": "Nvme$subsystem", 00:42:38.872 "trtype": "$TEST_TRANSPORT", 00:42:38.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:38.872 "adrfam": "ipv4", 00:42:38.872 "trsvcid": "$NVMF_PORT", 00:42:38.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:38.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:38.872 "hdgst": ${hdgst:-false}, 00:42:38.872 "ddgst": ${ddgst:-false} 00:42:38.872 }, 00:42:38.872 "method": "bdev_nvme_attach_controller" 00:42:38.872 } 00:42:38.872 EOF 00:42:38.872 )") 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
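The `config+=` heredocs and the closing `jq .` in the trace above template one `bdev_nvme_attach_controller` entry per subsystem and join the entries with commas. A simplified, POSIX-shell re-creation (the helper name is illustrative, not SPDK's `gen_nvmf_target_json` itself; the fixed `10.0.0.2`/`4420` values are taken from this log):

```shell
# Build the comma-joined attach-controller config that fio reads on /dev/fd/62.
gen_target_json_sketch() {
  out= sep=
  for sub in "$@"; do
    entry="{\"params\":{\"name\":\"Nvme$sub\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$sub\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$sub\",\"hdgst\":${hdgst:-false},\"ddgst\":${ddgst:-false}},\"method\":\"bdev_nvme_attach_controller\"}"
    out="$out$sep$entry"
    sep=,   # comma join, as the IFS=, / "${config[*]}" expansion does above
  done
  printf '%s\n' "$out"
}
hdgst=true ddgst=true
json=$(gen_target_json_sketch 0)
echo "$json"
```

For the digest run this yields a single Nvme0 entry with `"hdgst": true, "ddgst": true`, matching the config the log prints right after the `jq .` step.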
00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:42:38.872 13:22:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:38.872 "params": { 00:42:38.872 "name": "Nvme0", 00:42:38.872 "trtype": "tcp", 00:42:38.872 "traddr": "10.0.0.2", 00:42:38.873 "adrfam": "ipv4", 00:42:38.873 "trsvcid": "4420", 00:42:38.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:38.873 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:38.873 "hdgst": true, 00:42:38.873 "ddgst": true 00:42:38.873 }, 00:42:38.873 "method": "bdev_nvme_attach_controller" 00:42:38.873 }' 00:42:38.873 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:38.873 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:38.873 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:38.873 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:38.873 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:38.873 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:39.151 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:39.151 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:39.151 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:39.152 13:22:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:39.416 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:39.416 ... 
00:42:39.416 fio-3.35 00:42:39.416 Starting 3 threads 00:42:51.613 00:42:51.613 filename0: (groupid=0, jobs=1): err= 0: pid=1308962: Sun Dec 15 13:22:57 2024 00:42:51.613 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(366MiB/10045msec) 00:42:51.613 slat (nsec): min=6364, max=31763, avg=11718.92, stdev=1611.67 00:42:51.613 clat (usec): min=8255, max=50533, avg=10270.51, stdev=1241.64 00:42:51.613 lat (usec): min=8269, max=50545, avg=10282.23, stdev=1241.60 00:42:51.613 clat percentiles (usec): 00:42:51.613 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:42:51.613 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:42:51.613 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:42:51.613 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13698], 99.95th=[49021], 00:42:51.613 | 99.99th=[50594] 00:42:51.613 bw ( KiB/s): min=36096, max=38144, per=35.36%, avg=37427.20, stdev=554.68, samples=20 00:42:51.613 iops : min= 282, max= 298, avg=292.40, stdev= 4.33, samples=20 00:42:51.613 lat (msec) : 10=36.02%, 20=63.91%, 50=0.03%, 100=0.03% 00:42:51.613 cpu : usr=94.60%, sys=5.11%, ctx=20, majf=0, minf=73 00:42:51.613 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:51.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.613 issued rwts: total=2926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.613 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:51.613 filename0: (groupid=0, jobs=1): err= 0: pid=1308963: Sun Dec 15 13:22:57 2024 00:42:51.613 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(332MiB/10045msec) 00:42:51.613 slat (nsec): min=6490, max=40240, avg=11880.12, stdev=1797.01 00:42:51.613 clat (usec): min=8868, max=46827, avg=11315.69, stdev=1222.09 00:42:51.613 lat (usec): min=8880, max=46840, avg=11327.57, stdev=1222.10 00:42:51.613 clat percentiles (usec): 
00:42:51.613 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:42:51.613 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:42:51.613 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:42:51.613 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14746], 99.95th=[45351], 00:42:51.613 | 99.99th=[46924] 00:42:51.613 bw ( KiB/s): min=32768, max=34816, per=32.09%, avg=33971.20, stdev=440.28, samples=20 00:42:51.613 iops : min= 256, max= 272, avg=265.40, stdev= 3.44, samples=20 00:42:51.613 lat (msec) : 10=3.99%, 20=95.93%, 50=0.08% 00:42:51.613 cpu : usr=94.71%, sys=5.00%, ctx=19, majf=0, minf=31 00:42:51.613 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:51.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.613 issued rwts: total=2656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.613 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:51.613 filename0: (groupid=0, jobs=1): err= 0: pid=1308964: Sun Dec 15 13:22:57 2024 00:42:51.613 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(341MiB/10045msec) 00:42:51.613 slat (nsec): min=6516, max=57312, avg=11762.82, stdev=1817.92 00:42:51.613 clat (usec): min=7706, max=47939, avg=11029.98, stdev=1194.19 00:42:51.613 lat (usec): min=7718, max=47947, avg=11041.74, stdev=1194.13 00:42:51.613 clat percentiles (usec): 00:42:51.613 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:42:51.613 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:42:51.613 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:42:51.613 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13698], 99.95th=[44827], 00:42:51.613 | 99.99th=[47973] 00:42:51.613 bw ( KiB/s): min=34048, max=35584, per=32.93%, avg=34854.40, stdev=409.22, samples=20 00:42:51.613 iops : min= 266, max= 278, avg=272.30, 
stdev= 3.20, samples=20 00:42:51.613 lat (msec) : 10=7.38%, 20=92.55%, 50=0.07% 00:42:51.613 cpu : usr=94.36%, sys=5.35%, ctx=22, majf=0, minf=115 00:42:51.613 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:51.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.613 issued rwts: total=2725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.613 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:51.613 00:42:51.613 Run status group 0 (all jobs): 00:42:51.613 READ: bw=103MiB/s (108MB/s), 33.1MiB/s-36.4MiB/s (34.7MB/s-38.2MB/s), io=1038MiB (1089MB), run=10045-10045msec 00:42:51.613 13:22:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:51.613 13:22:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:51.613 13:22:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.614 
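As a sanity check, the per-thread bandwidths fio reports in the digest results above follow from IOPS times the 128 KiB read block size; the small gap on the middle thread (33.0 computed vs the reported 33.1 MiB/s) is rounding in fio's displayed IOPS:

```shell
# bandwidth (MiB/s) = IOPS * 128 KiB / 1024, for the three digest readers
bw_for() { awk -v i="$1" 'BEGIN { printf "%.1f", i * 128 / 1024 }'; }
bw1=$(bw_for 291)   # fio reports 36.4 MiB/s
bw2=$(bw_for 264)   # fio reports 33.1 MiB/s (displayed IOPS is rounded)
bw3=$(bw_for 271)   # fio reports 33.9 MiB/s
echo "$bw1 $bw2 $bw3"  # 36.4 33.0 33.9
```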
00:42:51.614 real 0m11.209s 00:42:51.614 user 0m35.033s 00:42:51.614 sys 0m1.895s 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:51.614 13:22:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:51.614 ************************************ 00:42:51.614 END TEST fio_dif_digest 00:42:51.614 ************************************ 00:42:51.614 13:22:57 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:51.614 13:22:57 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:51.614 rmmod nvme_tcp 00:42:51.614 rmmod nvme_fabrics 00:42:51.614 rmmod nvme_keyring 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1300568 ']' 00:42:51.614 13:22:57 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1300568 00:42:51.614 13:22:57 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1300568 ']' 00:42:51.614 13:22:57 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1300568 00:42:51.614 13:22:57 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:42:51.614 13:22:57 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:51.614 13:22:57 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1300568 00:42:51.614 13:22:58 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:42:51.614 13:22:58 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:51.614 13:22:58 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1300568' 00:42:51.614 killing process with pid 1300568 00:42:51.614 13:22:58 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1300568 00:42:51.614 13:22:58 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1300568 00:42:51.614 13:22:58 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:51.614 13:22:58 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:52.992 Waiting for block devices as requested 00:42:52.993 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:42:53.252 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:53.252 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:53.511 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:53.511 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:53.511 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:53.511 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:53.770 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:53.770 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:53.770 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:54.028 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:54.028 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:54.028 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:54.288 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:54.288 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:54.288 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:54.288 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:54.547 13:23:02 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:54.547 13:23:02 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:54.547 13:23:02 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:54.547 13:23:02 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:42:54.547 13:23:02 nvmf_dif -- 
nvmf/common.sh@791 -- # iptables-save 00:42:54.547 13:23:02 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:54.547 13:23:02 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:54.547 13:23:02 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:54.547 13:23:02 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.547 13:23:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:54.547 13:23:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:56.452 13:23:04 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:56.452 00:42:56.452 real 1m14.479s 00:42:56.452 user 7m10.714s 00:42:56.452 sys 0m20.387s 00:42:56.452 13:23:04 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:56.452 13:23:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:56.452 ************************************ 00:42:56.452 END TEST nvmf_dif 00:42:56.452 ************************************ 00:42:56.452 13:23:04 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:56.452 13:23:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:56.452 13:23:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:56.452 13:23:04 -- common/autotest_common.sh@10 -- # set +x 00:42:56.711 ************************************ 00:42:56.711 START TEST nvmf_abort_qd_sizes 00:42:56.711 ************************************ 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:56.711 * Looking for test storage... 
00:42:56.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.711 --rc genhtml_branch_coverage=1 00:42:56.711 --rc genhtml_function_coverage=1 00:42:56.711 --rc genhtml_legend=1 00:42:56.711 --rc geninfo_all_blocks=1 00:42:56.711 --rc geninfo_unexecuted_blocks=1 00:42:56.711 00:42:56.711 ' 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.711 --rc genhtml_branch_coverage=1 00:42:56.711 --rc genhtml_function_coverage=1 00:42:56.711 --rc genhtml_legend=1 00:42:56.711 --rc 
geninfo_all_blocks=1 00:42:56.711 --rc geninfo_unexecuted_blocks=1 00:42:56.711 00:42:56.711 ' 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.711 --rc genhtml_branch_coverage=1 00:42:56.711 --rc genhtml_function_coverage=1 00:42:56.711 --rc genhtml_legend=1 00:42:56.711 --rc geninfo_all_blocks=1 00:42:56.711 --rc geninfo_unexecuted_blocks=1 00:42:56.711 00:42:56.711 ' 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:56.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.711 --rc genhtml_branch_coverage=1 00:42:56.711 --rc genhtml_function_coverage=1 00:42:56.711 --rc genhtml_legend=1 00:42:56.711 --rc geninfo_all_blocks=1 00:42:56.711 --rc geninfo_unexecuted_blocks=1 00:42:56.711 00:42:56.711 ' 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:56.711 13:23:04 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:56.711 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:56.712 13:23:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:56.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:56.712 13:23:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:03.280 13:23:10 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:03.280 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:03.280 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:03.280 Found net devices under 0000:af:00.0: cvl_0_0 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:03.280 Found net devices under 0000:af:00.1: cvl_0_1 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:03.280 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:03.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:03.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:43:03.281 00:43:03.281 --- 10.0.0.2 ping statistics --- 00:43:03.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:03.281 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:03.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:03.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:43:03.281 00:43:03.281 --- 10.0.0.1 ping statistics --- 00:43:03.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:03.281 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:03.281 13:23:10 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:05.227 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:05.227 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:05.227 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:05.227 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:05.485 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:06.422 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:06.422 13:23:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1316609 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1316609 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1316609 ']' 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:06.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:06.422 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:06.422 [2024-12-15 13:23:14.272371] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:43:06.422 [2024-12-15 13:23:14.272424] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:06.680 [2024-12-15 13:23:14.354448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:06.680 [2024-12-15 13:23:14.378895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:06.680 [2024-12-15 13:23:14.378936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:06.680 [2024-12-15 13:23:14.378943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:06.680 [2024-12-15 13:23:14.378948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:06.680 [2024-12-15 13:23:14.378954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:06.680 [2024-12-15 13:23:14.380372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:06.680 [2024-12-15 13:23:14.380479] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:06.680 [2024-12-15 13:23:14.380591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:06.680 [2024-12-15 13:23:14.380591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:06.680 13:23:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:06.680 ************************************ 00:43:06.680 START TEST spdk_target_abort 00:43:06.680 ************************************ 00:43:06.680 13:23:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:06.680 13:23:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:06.680 13:23:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:06.680 13:23:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.680 13:23:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:09.956 spdk_targetn1 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:09.956 [2024-12-15 13:23:17.385620] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:09.956 [2024-12-15 13:23:17.430527] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:09.956 13:23:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:13.232 Initializing NVMe Controllers 00:43:13.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:13.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:13.232 Initialization complete. Launching workers. 
00:43:13.232 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15789, failed: 0 00:43:13.232 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1415, failed to submit 14374 00:43:13.232 success 698, unsuccessful 717, failed 0 00:43:13.232 13:23:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:13.232 13:23:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:16.508 Initializing NVMe Controllers 00:43:16.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:16.508 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:16.508 Initialization complete. Launching workers. 00:43:16.508 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8568, failed: 0 00:43:16.508 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 7322 00:43:16.508 success 324, unsuccessful 922, failed 0 00:43:16.508 13:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:16.508 13:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:19.783 Initializing NVMe Controllers 00:43:19.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:19.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:19.783 Initialization complete. Launching workers. 
00:43:19.783 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38740, failed: 0 00:43:19.783 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2885, failed to submit 35855 00:43:19.783 success 606, unsuccessful 2279, failed 0 00:43:19.783 13:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:19.783 13:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.783 13:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:19.783 13:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.783 13:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:19.783 13:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.783 13:23:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1316609 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1316609 ']' 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1316609 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1316609 00:43:20.717 13:23:28 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1316609' 00:43:20.717 killing process with pid 1316609 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1316609 00:43:20.717 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1316609 00:43:20.976 00:43:20.976 real 0m14.141s 00:43:20.976 user 0m54.144s 00:43:20.976 sys 0m2.330s 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:20.976 ************************************ 00:43:20.976 END TEST spdk_target_abort 00:43:20.976 ************************************ 00:43:20.976 13:23:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:20.976 13:23:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:20.976 13:23:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:20.976 13:23:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:20.976 ************************************ 00:43:20.976 START TEST kernel_target_abort 00:43:20.976 ************************************ 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:43:20.976 13:23:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:20.976 13:23:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:23.511 Waiting for block devices as requested 00:43:23.770 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:23.770 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:24.029 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:24.029 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:24.029 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:24.029 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:24.289 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:24.289 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:24.289 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:24.550 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:24.550 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:24.550 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:24.550 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:24.809 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:24.809 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:24.809 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:25.068 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:25.068 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:43:25.068 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:25.068 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:43:25.068 13:23:32 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:43:25.068 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:25.068 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:25.068 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:43:25.068 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:25.069 No valid GPT data, bailing 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:25.069 13:23:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:43:25.328 00:43:25.328 Discovery Log Number of Records 2, Generation counter 2 00:43:25.328 =====Discovery Log Entry 0====== 00:43:25.328 trtype: tcp 00:43:25.328 adrfam: ipv4 00:43:25.328 subtype: current discovery subsystem 00:43:25.328 treq: not specified, sq flow control disable supported 00:43:25.328 portid: 1 00:43:25.328 trsvcid: 4420 00:43:25.328 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:25.328 traddr: 10.0.0.1 00:43:25.328 eflags: none 00:43:25.328 sectype: none 00:43:25.328 =====Discovery Log Entry 1====== 00:43:25.328 trtype: tcp 00:43:25.328 adrfam: ipv4 00:43:25.328 subtype: nvme subsystem 00:43:25.328 treq: not specified, sq flow control disable supported 00:43:25.328 portid: 1 00:43:25.328 trsvcid: 4420 00:43:25.328 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:25.328 traddr: 10.0.0.1 00:43:25.328 eflags: none 00:43:25.328 sectype: none 00:43:25.328 13:23:33 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:25.328 13:23:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:28.615 Initializing NVMe Controllers 00:43:28.615 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:28.615 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:28.615 Initialization complete. Launching workers. 
00:43:28.615 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95734, failed: 0 00:43:28.615 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95734, failed to submit 0 00:43:28.615 success 0, unsuccessful 95734, failed 0 00:43:28.615 13:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:28.615 13:23:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:31.902 Initializing NVMe Controllers 00:43:31.902 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:31.902 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:31.902 Initialization complete. Launching workers. 00:43:31.902 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151046, failed: 0 00:43:31.902 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37910, failed to submit 113136 00:43:31.902 success 0, unsuccessful 37910, failed 0 00:43:31.902 13:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:31.902 13:23:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:35.192 Initializing NVMe Controllers 00:43:35.192 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:35.192 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:35.192 Initialization complete. Launching workers. 
00:43:35.192 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141317, failed: 0 00:43:35.192 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35406, failed to submit 105911 00:43:35.192 success 0, unsuccessful 35406, failed 0 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:43:35.192 13:23:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:37.729 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:37.729 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:38.297 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:38.556 00:43:38.556 real 0m17.480s 00:43:38.556 user 0m9.181s 00:43:38.556 sys 0m5.003s 00:43:38.556 13:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:38.556 13:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:38.556 ************************************ 00:43:38.556 END TEST kernel_target_abort 00:43:38.556 ************************************ 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:38.556 rmmod nvme_tcp 00:43:38.556 rmmod nvme_fabrics 00:43:38.556 rmmod nvme_keyring 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1316609 ']' 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1316609 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1316609 ']' 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1316609 00:43:38.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1316609) - No such process 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1316609 is not found' 00:43:38.556 Process with pid 1316609 is not found 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:38.556 13:23:46 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:41.090 Waiting for block devices as requested 00:43:41.349 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:41.349 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:41.608 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:41.608 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:41.608 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:41.608 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:41.868 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:41.868 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:41.868 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:42.127 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:42.127 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:42.127 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:42.127 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:42.385 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:42.385 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:42.385 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:42.645 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:42.645 13:23:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:45.182 13:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:45.182 00:43:45.182 real 0m48.124s 00:43:45.182 user 1m7.697s 00:43:45.182 sys 0m15.929s 00:43:45.182 13:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:45.182 13:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:45.182 ************************************ 00:43:45.182 END TEST nvmf_abort_qd_sizes 00:43:45.182 ************************************ 00:43:45.182 13:23:52 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:45.182 13:23:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:45.182 13:23:52 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:43:45.182 13:23:52 -- common/autotest_common.sh@10 -- # set +x 00:43:45.182 ************************************ 00:43:45.182 START TEST keyring_file 00:43:45.182 ************************************ 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:45.182 * Looking for test storage... 00:43:45.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:45.182 13:23:52 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:45.182 13:23:52 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:45.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:45.182 --rc genhtml_branch_coverage=1 00:43:45.182 --rc genhtml_function_coverage=1 00:43:45.182 --rc genhtml_legend=1 00:43:45.182 --rc geninfo_all_blocks=1 00:43:45.182 --rc geninfo_unexecuted_blocks=1 00:43:45.182 00:43:45.182 ' 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:45.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:45.182 --rc genhtml_branch_coverage=1 00:43:45.182 --rc genhtml_function_coverage=1 00:43:45.182 --rc genhtml_legend=1 00:43:45.182 --rc geninfo_all_blocks=1 00:43:45.182 --rc 
geninfo_unexecuted_blocks=1 00:43:45.182 00:43:45.182 ' 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:45.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:45.182 --rc genhtml_branch_coverage=1 00:43:45.182 --rc genhtml_function_coverage=1 00:43:45.182 --rc genhtml_legend=1 00:43:45.182 --rc geninfo_all_blocks=1 00:43:45.182 --rc geninfo_unexecuted_blocks=1 00:43:45.182 00:43:45.182 ' 00:43:45.182 13:23:52 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:45.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:45.182 --rc genhtml_branch_coverage=1 00:43:45.182 --rc genhtml_function_coverage=1 00:43:45.182 --rc genhtml_legend=1 00:43:45.182 --rc geninfo_all_blocks=1 00:43:45.182 --rc geninfo_unexecuted_blocks=1 00:43:45.182 00:43:45.182 ' 00:43:45.182 13:23:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:45.182 13:23:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:45.182 13:23:52 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:45.182 13:23:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:45.183 13:23:52 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:45.183 13:23:52 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:45.183 13:23:52 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:45.183 13:23:52 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:45.183 13:23:52 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:45.183 13:23:52 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:45.183 13:23:52 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:45.183 13:23:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:45.183 13:23:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:45.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.r8WTnl5OF1 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.r8WTnl5OF1 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.r8WTnl5OF1 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.r8WTnl5OF1 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mkPl4NXTUO 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:45.183 13:23:52 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mkPl4NXTUO 00:43:45.183 13:23:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mkPl4NXTUO 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mkPl4NXTUO 
00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=1325185 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:45.183 13:23:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1325185 00:43:45.183 13:23:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1325185 ']' 00:43:45.183 13:23:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:45.183 13:23:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:45.183 13:23:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:45.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:45.183 13:23:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:45.183 13:23:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:45.183 [2024-12-15 13:23:52.934209] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:43:45.183 [2024-12-15 13:23:52.934259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325185 ] 00:43:45.183 [2024-12-15 13:23:53.011283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:45.183 [2024-12-15 13:23:53.033728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:45.442 13:23:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:45.442 [2024-12-15 13:23:53.231448] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:45.442 null0 00:43:45.442 [2024-12-15 13:23:53.263500] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:45.442 [2024-12-15 13:23:53.263775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.442 13:23:53 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:45.442 [2024-12-15 13:23:53.295572] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:45.442 request: 00:43:45.442 { 00:43:45.442 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:45.442 "secure_channel": false, 00:43:45.442 "listen_address": { 00:43:45.442 "trtype": "tcp", 00:43:45.442 "traddr": "127.0.0.1", 00:43:45.442 "trsvcid": "4420" 00:43:45.442 }, 00:43:45.442 "method": "nvmf_subsystem_add_listener", 00:43:45.442 "req_id": 1 00:43:45.442 } 00:43:45.442 Got JSON-RPC error response 00:43:45.442 response: 00:43:45.442 { 00:43:45.442 "code": -32602, 00:43:45.442 "message": "Invalid parameters" 00:43:45.442 } 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:45.442 13:23:53 keyring_file -- keyring/file.sh@47 -- # bperfpid=1325201 00:43:45.442 13:23:53 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1325201 /var/tmp/bperf.sock 00:43:45.442 13:23:53 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:45.442 13:23:53 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1325201 ']' 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:45.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:45.442 13:23:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:45.701 [2024-12-15 13:23:53.351415] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:43:45.701 [2024-12-15 13:23:53.351454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325201 ] 00:43:45.701 [2024-12-15 13:23:53.426110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:45.701 [2024-12-15 13:23:53.447845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:45.701 13:23:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:45.701 13:23:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:45.701 13:23:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.r8WTnl5OF1 00:43:45.701 13:23:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.r8WTnl5OF1 00:43:45.960 13:23:53 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mkPl4NXTUO 00:43:45.960 13:23:53 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mkPl4NXTUO 00:43:46.219 13:23:53 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:43:46.219 13:23:53 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:46.219 13:23:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:46.219 13:23:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:46.219 13:23:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:46.479 13:23:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.r8WTnl5OF1 == \/\t\m\p\/\t\m\p\.\r\8\W\T\n\l\5\O\F\1 ]] 00:43:46.479 13:23:54 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:43:46.479 13:23:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:46.479 13:23:54 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:43:46.479 13:23:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:46.479 13:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:46.479 13:23:54 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.mkPl4NXTUO == \/\t\m\p\/\t\m\p\.\m\k\P\l\4\N\X\T\U\O ]] 00:43:46.479 13:23:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:46.479 13:23:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:46.479 13:23:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:46.479 13:23:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:46.479 13:23:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:46.479 13:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:43:46.737 13:23:54 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:46.737 13:23:54 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:46.737 13:23:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:46.737 13:23:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:46.737 13:23:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:46.737 13:23:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:46.737 13:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:46.996 13:23:54 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:46.996 13:23:54 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:46.996 13:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:46.996 [2024-12-15 13:23:54.893488] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:47.258 nvme0n1 00:43:47.258 13:23:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:47.258 13:23:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:47.258 13:23:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:47.258 13:23:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:47.258 13:23:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:47.258 13:23:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:43:47.517 13:23:55 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:47.517 13:23:55 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:47.517 13:23:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:47.517 13:23:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:47.517 13:23:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:47.517 13:23:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:47.517 13:23:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:47.517 13:23:55 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:47.517 13:23:55 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:47.776 Running I/O for 1 seconds... 00:43:48.713 19403.00 IOPS, 75.79 MiB/s 00:43:48.713 Latency(us) 00:43:48.713 [2024-12-15T12:23:56.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:48.713 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:48.713 nvme0n1 : 1.00 19454.73 76.00 0.00 0.00 6567.75 2449.80 18100.42 00:43:48.713 [2024-12-15T12:23:56.620Z] =================================================================================================================== 00:43:48.713 [2024-12-15T12:23:56.620Z] Total : 19454.73 76.00 0.00 0.00 6567.75 2449.80 18100.42 00:43:48.713 { 00:43:48.713 "results": [ 00:43:48.713 { 00:43:48.713 "job": "nvme0n1", 00:43:48.713 "core_mask": "0x2", 00:43:48.713 "workload": "randrw", 00:43:48.713 "percentage": 50, 00:43:48.713 "status": "finished", 00:43:48.713 "queue_depth": 128, 00:43:48.713 "io_size": 4096, 00:43:48.713 "runtime": 1.004023, 00:43:48.713 "iops": 19454.733606700243, 00:43:48.713 "mibps": 75.99505315117283, 
00:43:48.713 "io_failed": 0, 00:43:48.713 "io_timeout": 0, 00:43:48.713 "avg_latency_us": 6567.753908623502, 00:43:48.713 "min_latency_us": 2449.7980952380954, 00:43:48.713 "max_latency_us": 18100.41904761905 00:43:48.713 } 00:43:48.713 ], 00:43:48.713 "core_count": 1 00:43:48.713 } 00:43:48.713 13:23:56 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:48.713 13:23:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:49.036 13:23:56 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:49.036 13:23:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:49.036 13:23:56 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:49.036 13:23:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:49.310 13:23:57 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:43:49.310 13:23:57 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:49.310 13:23:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:49.310 13:23:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:49.310 13:23:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:49.310 13:23:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:49.310 13:23:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:49.310 13:23:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:49.310 13:23:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:49.310 13:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:49.569 [2024-12-15 13:23:57.275795] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:49.569 [2024-12-15 13:23:57.276527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa456a0 (107): Transport endpoint is not connected 00:43:49.569 [2024-12-15 13:23:57.277521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa456a0 (9): Bad file descriptor 00:43:49.569 [2024-12-15 13:23:57.278522] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:49.569 [2024-12-15 13:23:57.278533] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:49.569 [2024-12-15 13:23:57.278540] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:49.569 [2024-12-15 13:23:57.278550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:43:49.569 request: 00:43:49.569 { 00:43:49.569 "name": "nvme0", 00:43:49.569 "trtype": "tcp", 00:43:49.569 "traddr": "127.0.0.1", 00:43:49.569 "adrfam": "ipv4", 00:43:49.569 "trsvcid": "4420", 00:43:49.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:49.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:49.569 "prchk_reftag": false, 00:43:49.569 "prchk_guard": false, 00:43:49.569 "hdgst": false, 00:43:49.569 "ddgst": false, 00:43:49.569 "psk": "key1", 00:43:49.569 "allow_unrecognized_csi": false, 00:43:49.569 "method": "bdev_nvme_attach_controller", 00:43:49.569 "req_id": 1 00:43:49.569 } 00:43:49.569 Got JSON-RPC error response 00:43:49.569 response: 00:43:49.569 { 00:43:49.569 "code": -5, 00:43:49.569 "message": "Input/output error" 00:43:49.569 } 00:43:49.569 13:23:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:49.569 13:23:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:49.569 13:23:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:49.569 13:23:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:49.569 13:23:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:43:49.569 13:23:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:49.569 13:23:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:49.569 13:23:57 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:43:49.569 13:23:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:49.569 13:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:49.827 13:23:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:49.827 13:23:57 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:43:49.827 13:23:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:49.827 13:23:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:49.827 13:23:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:49.827 13:23:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:49.827 13:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:49.827 13:23:57 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:43:49.827 13:23:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:43:49.827 13:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:50.086 13:23:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:50.086 13:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:50.345 13:23:58 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:50.345 13:23:58 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:50.345 13:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:50.603 13:23:58 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:43:50.603 13:23:58 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.r8WTnl5OF1 00:43:50.603 13:23:58 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.r8WTnl5OF1 00:43:50.603 13:23:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:50.603 13:23:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.r8WTnl5OF1 00:43:50.603 13:23:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:50.603 13:23:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:50.603 13:23:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:50.603 13:23:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:50.603 13:23:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.r8WTnl5OF1 00:43:50.603 13:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.r8WTnl5OF1 00:43:50.603 [2024-12-15 13:23:58.431666] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.r8WTnl5OF1': 0100660 00:43:50.603 [2024-12-15 13:23:58.431693] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:50.603 request: 00:43:50.604 { 00:43:50.604 "name": "key0", 00:43:50.604 "path": "/tmp/tmp.r8WTnl5OF1", 00:43:50.604 "method": "keyring_file_add_key", 00:43:50.604 "req_id": 1 00:43:50.604 } 00:43:50.604 Got JSON-RPC error response 00:43:50.604 response: 00:43:50.604 { 00:43:50.604 "code": -1, 00:43:50.604 "message": "Operation not permitted" 00:43:50.604 } 00:43:50.604 13:23:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:50.604 13:23:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:50.604 13:23:58 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:50.604 13:23:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:50.604 13:23:58 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.r8WTnl5OF1 00:43:50.604 13:23:58 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.r8WTnl5OF1 00:43:50.604 13:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.r8WTnl5OF1 00:43:50.862 13:23:58 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.r8WTnl5OF1 00:43:50.862 13:23:58 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:50.862 13:23:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:50.862 13:23:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:50.862 13:23:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:50.862 13:23:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:50.862 13:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:51.121 13:23:58 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:51.121 13:23:58 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:51.121 13:23:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:51.121 13:23:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:51.121 13:23:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:51.121 13:23:58 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:51.121 13:23:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:51.121 13:23:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:51.121 13:23:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:51.121 13:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:51.122 [2024-12-15 13:23:59.025223] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.r8WTnl5OF1': No such file or directory 00:43:51.122 [2024-12-15 13:23:59.025242] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:51.122 [2024-12-15 13:23:59.025258] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:51.122 [2024-12-15 13:23:59.025265] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:51.122 [2024-12-15 13:23:59.025273] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:51.122 [2024-12-15 13:23:59.025279] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:51.381 request: 00:43:51.381 { 00:43:51.381 "name": "nvme0", 00:43:51.381 "trtype": "tcp", 00:43:51.381 "traddr": "127.0.0.1", 00:43:51.381 "adrfam": "ipv4", 00:43:51.381 "trsvcid": "4420", 00:43:51.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:51.381 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:43:51.381 "prchk_reftag": false, 00:43:51.381 "prchk_guard": false, 00:43:51.381 "hdgst": false, 00:43:51.381 "ddgst": false, 00:43:51.381 "psk": "key0", 00:43:51.381 "allow_unrecognized_csi": false, 00:43:51.381 "method": "bdev_nvme_attach_controller", 00:43:51.381 "req_id": 1 00:43:51.381 } 00:43:51.381 Got JSON-RPC error response 00:43:51.381 response: 00:43:51.381 { 00:43:51.381 "code": -19, 00:43:51.381 "message": "No such device" 00:43:51.381 } 00:43:51.381 13:23:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:51.381 13:23:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:51.381 13:23:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:51.381 13:23:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:51.381 13:23:59 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:51.381 13:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:51.381 13:23:59 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:51.381 13:23:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:51.381 13:23:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:51.381 13:23:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:51.381 13:23:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:51.381 13:23:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:51.381 13:23:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nQ1wM7HyQE 00:43:51.381 13:23:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:51.381 13:23:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:51.381 13:23:59 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:43:51.381 13:23:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:51.381 13:23:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:51.381 13:23:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:51.381 13:23:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:51.640 13:23:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nQ1wM7HyQE 00:43:51.640 13:23:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nQ1wM7HyQE 00:43:51.640 13:23:59 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nQ1wM7HyQE 00:43:51.640 13:23:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nQ1wM7HyQE 00:43:51.640 13:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nQ1wM7HyQE 00:43:51.640 13:23:59 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:51.640 13:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:51.898 nvme0n1 00:43:51.898 13:23:59 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:51.898 13:23:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:51.898 13:23:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:51.898 13:23:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:51.898 13:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:51.898 
13:23:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:52.157 13:23:59 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:52.157 13:23:59 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:52.157 13:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:52.416 13:24:00 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:52.416 13:24:00 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:52.416 13:24:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:52.416 13:24:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:52.416 13:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:52.675 13:24:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:52.675 13:24:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:52.675 13:24:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:52.675 13:24:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:52.675 13:24:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:52.675 13:24:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:52.675 13:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:52.933 13:24:00 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:52.933 13:24:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:52.933 13:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:43:52.933 13:24:00 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:52.933 13:24:00 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:52.933 13:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:53.191 13:24:00 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:53.191 13:24:00 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nQ1wM7HyQE 00:43:53.191 13:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nQ1wM7HyQE 00:43:53.450 13:24:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mkPl4NXTUO 00:43:53.450 13:24:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mkPl4NXTUO 00:43:53.709 13:24:01 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:53.709 13:24:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:53.968 nvme0n1 00:43:53.968 13:24:01 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:53.968 13:24:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:54.227 13:24:01 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:54.228 "subsystems": [ 00:43:54.228 { 00:43:54.228 "subsystem": "keyring", 00:43:54.228 
"config": [ 00:43:54.228 { 00:43:54.228 "method": "keyring_file_add_key", 00:43:54.228 "params": { 00:43:54.228 "name": "key0", 00:43:54.228 "path": "/tmp/tmp.nQ1wM7HyQE" 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "keyring_file_add_key", 00:43:54.228 "params": { 00:43:54.228 "name": "key1", 00:43:54.228 "path": "/tmp/tmp.mkPl4NXTUO" 00:43:54.228 } 00:43:54.228 } 00:43:54.228 ] 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "subsystem": "iobuf", 00:43:54.228 "config": [ 00:43:54.228 { 00:43:54.228 "method": "iobuf_set_options", 00:43:54.228 "params": { 00:43:54.228 "small_pool_count": 8192, 00:43:54.228 "large_pool_count": 1024, 00:43:54.228 "small_bufsize": 8192, 00:43:54.228 "large_bufsize": 135168, 00:43:54.228 "enable_numa": false 00:43:54.228 } 00:43:54.228 } 00:43:54.228 ] 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "subsystem": "sock", 00:43:54.228 "config": [ 00:43:54.228 { 00:43:54.228 "method": "sock_set_default_impl", 00:43:54.228 "params": { 00:43:54.228 "impl_name": "posix" 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "sock_impl_set_options", 00:43:54.228 "params": { 00:43:54.228 "impl_name": "ssl", 00:43:54.228 "recv_buf_size": 4096, 00:43:54.228 "send_buf_size": 4096, 00:43:54.228 "enable_recv_pipe": true, 00:43:54.228 "enable_quickack": false, 00:43:54.228 "enable_placement_id": 0, 00:43:54.228 "enable_zerocopy_send_server": true, 00:43:54.228 "enable_zerocopy_send_client": false, 00:43:54.228 "zerocopy_threshold": 0, 00:43:54.228 "tls_version": 0, 00:43:54.228 "enable_ktls": false 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "sock_impl_set_options", 00:43:54.228 "params": { 00:43:54.228 "impl_name": "posix", 00:43:54.228 "recv_buf_size": 2097152, 00:43:54.228 "send_buf_size": 2097152, 00:43:54.228 "enable_recv_pipe": true, 00:43:54.228 "enable_quickack": false, 00:43:54.228 "enable_placement_id": 0, 00:43:54.228 "enable_zerocopy_send_server": true, 00:43:54.228 
"enable_zerocopy_send_client": false, 00:43:54.228 "zerocopy_threshold": 0, 00:43:54.228 "tls_version": 0, 00:43:54.228 "enable_ktls": false 00:43:54.228 } 00:43:54.228 } 00:43:54.228 ] 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "subsystem": "vmd", 00:43:54.228 "config": [] 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "subsystem": "accel", 00:43:54.228 "config": [ 00:43:54.228 { 00:43:54.228 "method": "accel_set_options", 00:43:54.228 "params": { 00:43:54.228 "small_cache_size": 128, 00:43:54.228 "large_cache_size": 16, 00:43:54.228 "task_count": 2048, 00:43:54.228 "sequence_count": 2048, 00:43:54.228 "buf_count": 2048 00:43:54.228 } 00:43:54.228 } 00:43:54.228 ] 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "subsystem": "bdev", 00:43:54.228 "config": [ 00:43:54.228 { 00:43:54.228 "method": "bdev_set_options", 00:43:54.228 "params": { 00:43:54.228 "bdev_io_pool_size": 65535, 00:43:54.228 "bdev_io_cache_size": 256, 00:43:54.228 "bdev_auto_examine": true, 00:43:54.228 "iobuf_small_cache_size": 128, 00:43:54.228 "iobuf_large_cache_size": 16 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "bdev_raid_set_options", 00:43:54.228 "params": { 00:43:54.228 "process_window_size_kb": 1024, 00:43:54.228 "process_max_bandwidth_mb_sec": 0 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "bdev_iscsi_set_options", 00:43:54.228 "params": { 00:43:54.228 "timeout_sec": 30 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "bdev_nvme_set_options", 00:43:54.228 "params": { 00:43:54.228 "action_on_timeout": "none", 00:43:54.228 "timeout_us": 0, 00:43:54.228 "timeout_admin_us": 0, 00:43:54.228 "keep_alive_timeout_ms": 10000, 00:43:54.228 "arbitration_burst": 0, 00:43:54.228 "low_priority_weight": 0, 00:43:54.228 "medium_priority_weight": 0, 00:43:54.228 "high_priority_weight": 0, 00:43:54.228 "nvme_adminq_poll_period_us": 10000, 00:43:54.228 "nvme_ioq_poll_period_us": 0, 00:43:54.228 "io_queue_requests": 512, 00:43:54.228 
"delay_cmd_submit": true, 00:43:54.228 "transport_retry_count": 4, 00:43:54.228 "bdev_retry_count": 3, 00:43:54.228 "transport_ack_timeout": 0, 00:43:54.228 "ctrlr_loss_timeout_sec": 0, 00:43:54.228 "reconnect_delay_sec": 0, 00:43:54.228 "fast_io_fail_timeout_sec": 0, 00:43:54.228 "disable_auto_failback": false, 00:43:54.228 "generate_uuids": false, 00:43:54.228 "transport_tos": 0, 00:43:54.228 "nvme_error_stat": false, 00:43:54.228 "rdma_srq_size": 0, 00:43:54.228 "io_path_stat": false, 00:43:54.228 "allow_accel_sequence": false, 00:43:54.228 "rdma_max_cq_size": 0, 00:43:54.228 "rdma_cm_event_timeout_ms": 0, 00:43:54.228 "dhchap_digests": [ 00:43:54.228 "sha256", 00:43:54.228 "sha384", 00:43:54.228 "sha512" 00:43:54.228 ], 00:43:54.228 "dhchap_dhgroups": [ 00:43:54.228 "null", 00:43:54.228 "ffdhe2048", 00:43:54.228 "ffdhe3072", 00:43:54.228 "ffdhe4096", 00:43:54.228 "ffdhe6144", 00:43:54.228 "ffdhe8192" 00:43:54.228 ], 00:43:54.228 "rdma_umr_per_io": false 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "bdev_nvme_attach_controller", 00:43:54.228 "params": { 00:43:54.228 "name": "nvme0", 00:43:54.228 "trtype": "TCP", 00:43:54.228 "adrfam": "IPv4", 00:43:54.228 "traddr": "127.0.0.1", 00:43:54.228 "trsvcid": "4420", 00:43:54.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:54.228 "prchk_reftag": false, 00:43:54.228 "prchk_guard": false, 00:43:54.228 "ctrlr_loss_timeout_sec": 0, 00:43:54.228 "reconnect_delay_sec": 0, 00:43:54.228 "fast_io_fail_timeout_sec": 0, 00:43:54.228 "psk": "key0", 00:43:54.228 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:54.228 "hdgst": false, 00:43:54.228 "ddgst": false, 00:43:54.228 "multipath": "multipath" 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "bdev_nvme_set_hotplug", 00:43:54.228 "params": { 00:43:54.228 "period_us": 100000, 00:43:54.228 "enable": false 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "bdev_wait_for_examine" 00:43:54.228 } 00:43:54.228 ] 00:43:54.228 
}, 00:43:54.228 { 00:43:54.228 "subsystem": "nbd", 00:43:54.228 "config": [] 00:43:54.228 } 00:43:54.228 ] 00:43:54.228 }' 00:43:54.228 13:24:01 keyring_file -- keyring/file.sh@115 -- # killprocess 1325201 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1325201 ']' 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1325201 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325201 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325201' 00:43:54.228 killing process with pid 1325201 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@973 -- # kill 1325201 00:43:54.228 Received shutdown signal, test time was about 1.000000 seconds 00:43:54.228 00:43:54.228 Latency(us) 00:43:54.228 [2024-12-15T12:24:02.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:54.228 [2024-12-15T12:24:02.135Z] =================================================================================================================== 00:43:54.228 [2024-12-15T12:24:02.135Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:54.228 13:24:01 keyring_file -- common/autotest_common.sh@978 -- # wait 1325201 00:43:54.228 13:24:02 keyring_file -- keyring/file.sh@118 -- # bperfpid=1326806 00:43:54.228 13:24:02 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1326806 /var/tmp/bperf.sock 00:43:54.228 13:24:02 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1326806 ']' 00:43:54.228 13:24:02 keyring_file -- keyring/file.sh@116 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:54.228 13:24:02 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:54.228 "subsystems": [ 00:43:54.228 { 00:43:54.228 "subsystem": "keyring", 00:43:54.228 "config": [ 00:43:54.228 { 00:43:54.228 "method": "keyring_file_add_key", 00:43:54.228 "params": { 00:43:54.228 "name": "key0", 00:43:54.228 "path": "/tmp/tmp.nQ1wM7HyQE" 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "keyring_file_add_key", 00:43:54.228 "params": { 00:43:54.228 "name": "key1", 00:43:54.228 "path": "/tmp/tmp.mkPl4NXTUO" 00:43:54.228 } 00:43:54.228 } 00:43:54.228 ] 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "subsystem": "iobuf", 00:43:54.228 "config": [ 00:43:54.228 { 00:43:54.228 "method": "iobuf_set_options", 00:43:54.228 "params": { 00:43:54.228 "small_pool_count": 8192, 00:43:54.228 "large_pool_count": 1024, 00:43:54.228 "small_bufsize": 8192, 00:43:54.228 "large_bufsize": 135168, 00:43:54.228 "enable_numa": false 00:43:54.228 } 00:43:54.228 } 00:43:54.228 ] 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "subsystem": "sock", 00:43:54.228 "config": [ 00:43:54.228 { 00:43:54.228 "method": "sock_set_default_impl", 00:43:54.228 "params": { 00:43:54.228 "impl_name": "posix" 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "sock_impl_set_options", 00:43:54.228 "params": { 00:43:54.228 "impl_name": "ssl", 00:43:54.228 "recv_buf_size": 4096, 00:43:54.228 "send_buf_size": 4096, 00:43:54.228 "enable_recv_pipe": true, 00:43:54.228 "enable_quickack": false, 00:43:54.228 "enable_placement_id": 0, 00:43:54.228 "enable_zerocopy_send_server": true, 00:43:54.228 "enable_zerocopy_send_client": false, 00:43:54.228 "zerocopy_threshold": 0, 00:43:54.228 "tls_version": 0, 00:43:54.228 "enable_ktls": false 00:43:54.228 } 00:43:54.228 }, 00:43:54.228 { 00:43:54.228 "method": "sock_impl_set_options", 00:43:54.228 
"params": { 00:43:54.228 "impl_name": "posix", 00:43:54.228 "recv_buf_size": 2097152, 00:43:54.228 "send_buf_size": 2097152, 00:43:54.228 "enable_recv_pipe": true, 00:43:54.228 "enable_quickack": false, 00:43:54.228 "enable_placement_id": 0, 00:43:54.228 "enable_zerocopy_send_server": true, 00:43:54.228 "enable_zerocopy_send_client": false, 00:43:54.228 "zerocopy_threshold": 0, 00:43:54.228 "tls_version": 0, 00:43:54.229 "enable_ktls": false 00:43:54.229 } 00:43:54.229 } 00:43:54.229 ] 00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "subsystem": "vmd", 00:43:54.229 "config": [] 00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "subsystem": "accel", 00:43:54.229 "config": [ 00:43:54.229 { 00:43:54.229 "method": "accel_set_options", 00:43:54.229 "params": { 00:43:54.229 "small_cache_size": 128, 00:43:54.229 "large_cache_size": 16, 00:43:54.229 "task_count": 2048, 00:43:54.229 "sequence_count": 2048, 00:43:54.229 "buf_count": 2048 00:43:54.229 } 00:43:54.229 } 00:43:54.229 ] 00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "subsystem": "bdev", 00:43:54.229 "config": [ 00:43:54.229 { 00:43:54.229 "method": "bdev_set_options", 00:43:54.229 "params": { 00:43:54.229 "bdev_io_pool_size": 65535, 00:43:54.229 "bdev_io_cache_size": 256, 00:43:54.229 "bdev_auto_examine": true, 00:43:54.229 "iobuf_small_cache_size": 128, 00:43:54.229 "iobuf_large_cache_size": 16 00:43:54.229 } 00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "method": "bdev_raid_set_options", 00:43:54.229 "params": { 00:43:54.229 "process_window_size_kb": 1024, 00:43:54.229 "process_max_bandwidth_mb_sec": 0 00:43:54.229 } 00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "method": "bdev_iscsi_set_options", 00:43:54.229 "params": { 00:43:54.229 "timeout_sec": 30 00:43:54.229 } 00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "method": "bdev_nvme_set_options", 00:43:54.229 "params": { 00:43:54.229 "action_on_timeout": "none", 00:43:54.229 "timeout_us": 0, 00:43:54.229 "timeout_admin_us": 0, 00:43:54.229 "keep_alive_timeout_ms": 10000, 
00:43:54.229 "arbitration_burst": 0, 00:43:54.229 "low_priority_weight": 0, 00:43:54.229 "medium_priority_weight": 0, 00:43:54.229 "high_priority_weight": 0, 00:43:54.229 "nvme_adminq_poll_period_us": 10000, 00:43:54.229 "nvme_ioq_poll_period_us": 0, 00:43:54.229 "io_queue_requests": 512, 00:43:54.229 "delay_cmd_submit": true, 00:43:54.229 "transport_retry_count": 4, 00:43:54.229 "bdev_retry_count": 3, 00:43:54.229 "transport_ack_timeout": 0, 00:43:54.229 "ctrlr_loss_timeout_sec": 0, 00:43:54.229 "reconnect_delay_sec": 0, 00:43:54.229 "fast_io_fail_timeout_sec": 0, 00:43:54.229 "disable_auto_failback": false, 00:43:54.229 "generate_uuids": false, 00:43:54.229 "transport_tos": 0, 00:43:54.229 "nvme_error_stat": false, 00:43:54.229 "rdma_srq_size": 0, 00:43:54.229 "io_path_stat": false, 00:43:54.229 "allow_accel_sequence": false, 00:43:54.229 "rdma_max_cq_size": 0, 00:43:54.229 "rdma_cm_event_timeout_ms": 0, 00:43:54.229 "dhchap_digests": [ 00:43:54.229 "sha256", 00:43:54.229 "sha384", 00:43:54.229 "sha512" 00:43:54.229 ], 00:43:54.229 "dhchap_dhgroups": [ 00:43:54.229 "null", 00:43:54.229 "ffdhe2048", 00:43:54.229 "ffdhe3072", 00:43:54.229 "ffdhe4096", 00:43:54.229 "ffdhe6144", 00:43:54.229 "ffdhe8192" 00:43:54.229 ], 00:43:54.229 "rdma_umr_per_io": false 00:43:54.229 } 00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "method": "bdev_nvme_attach_controller", 00:43:54.229 "params": { 00:43:54.229 "name": "nvme0", 00:43:54.229 "trtype": "TCP", 00:43:54.229 "adrfam": "IPv4", 00:43:54.229 "traddr": "127.0.0.1", 00:43:54.229 "trsvcid": "4420", 00:43:54.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:54.229 "prchk_reftag": false, 00:43:54.229 "prchk_guard": false, 00:43:54.229 "ctrlr_loss_timeout_sec": 0, 00:43:54.229 "reconnect_delay_sec": 0, 00:43:54.229 "fast_io_fail_timeout_sec": 0, 00:43:54.229 "psk": "key0", 00:43:54.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:54.229 "hdgst": false, 00:43:54.229 "ddgst": false, 00:43:54.229 "multipath": "multipath" 00:43:54.229 } 
00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "method": "bdev_nvme_set_hotplug", 00:43:54.229 "params": { 00:43:54.229 "period_us": 100000, 00:43:54.229 "enable": false 00:43:54.229 } 00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "method": "bdev_wait_for_examine" 00:43:54.229 } 00:43:54.229 ] 00:43:54.229 }, 00:43:54.229 { 00:43:54.229 "subsystem": "nbd", 00:43:54.229 "config": [] 00:43:54.229 } 00:43:54.229 ] 00:43:54.229 }' 00:43:54.229 13:24:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:54.229 13:24:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:54.229 13:24:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:54.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:54.229 13:24:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:54.229 13:24:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:54.488 [2024-12-15 13:24:02.150770] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:43:54.488 [2024-12-15 13:24:02.150847] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326806 ] 00:43:54.488 [2024-12-15 13:24:02.225900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:54.488 [2024-12-15 13:24:02.246338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:54.747 [2024-12-15 13:24:02.402245] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:55.315 13:24:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:55.315 13:24:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:55.315 13:24:02 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:55.315 13:24:02 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:55.315 13:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:55.315 13:24:03 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:55.315 13:24:03 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:55.315 13:24:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:55.315 13:24:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:55.315 13:24:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:55.315 13:24:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:55.315 13:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:55.574 13:24:03 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:55.574 13:24:03 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:55.574 13:24:03 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:55.574 13:24:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:55.574 13:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:55.574 13:24:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:55.574 13:24:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:55.833 13:24:03 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:55.833 13:24:03 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:55.833 13:24:03 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:55.833 13:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:56.092 13:24:03 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:56.092 13:24:03 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:56.092 13:24:03 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nQ1wM7HyQE /tmp/tmp.mkPl4NXTUO 00:43:56.092 13:24:03 keyring_file -- keyring/file.sh@20 -- # killprocess 1326806 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1326806 ']' 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1326806 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1326806 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1326806' 00:43:56.092 killing process with pid 1326806 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@973 -- # kill 1326806 00:43:56.092 Received shutdown signal, test time was about 1.000000 seconds 00:43:56.092 00:43:56.092 Latency(us) 00:43:56.092 [2024-12-15T12:24:03.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:56.092 [2024-12-15T12:24:03.999Z] =================================================================================================================== 00:43:56.092 [2024-12-15T12:24:03.999Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@978 -- # wait 1326806 00:43:56.092 13:24:03 keyring_file -- keyring/file.sh@21 -- # killprocess 1325185 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1325185 ']' 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1325185 00:43:56.092 13:24:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:56.351 13:24:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:56.351 13:24:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325185 00:43:56.351 13:24:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:56.351 13:24:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:56.351 13:24:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325185' 00:43:56.351 killing process with pid 1325185 00:43:56.351 13:24:04 keyring_file -- common/autotest_common.sh@973 -- # kill 1325185 00:43:56.351 13:24:04 keyring_file -- common/autotest_common.sh@978 -- # wait 1325185 00:43:56.610 00:43:56.610 real 0m11.776s 00:43:56.610 user 0m29.344s 00:43:56.610 sys 0m2.701s 00:43:56.610 13:24:04 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:43:56.610 13:24:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:56.610 ************************************ 00:43:56.610 END TEST keyring_file 00:43:56.610 ************************************ 00:43:56.610 13:24:04 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:43:56.610 13:24:04 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:56.610 13:24:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:56.610 13:24:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:56.610 13:24:04 -- common/autotest_common.sh@10 -- # set +x 00:43:56.610 ************************************ 00:43:56.610 START TEST keyring_linux 00:43:56.610 ************************************ 00:43:56.610 13:24:04 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:56.610 Joined session keyring: 419420631 00:43:56.610 * Looking for test storage... 
00:43:56.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:56.610 13:24:04 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:56.871 13:24:04 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:43:56.871 13:24:04 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:56.871 13:24:04 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:56.871 13:24:04 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:56.871 13:24:04 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:56.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.871 --rc genhtml_branch_coverage=1 00:43:56.871 --rc genhtml_function_coverage=1 00:43:56.871 --rc genhtml_legend=1 00:43:56.871 --rc geninfo_all_blocks=1 00:43:56.871 --rc geninfo_unexecuted_blocks=1 00:43:56.871 00:43:56.871 ' 00:43:56.871 13:24:04 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:56.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.871 --rc genhtml_branch_coverage=1 00:43:56.871 --rc genhtml_function_coverage=1 00:43:56.871 --rc genhtml_legend=1 00:43:56.871 --rc geninfo_all_blocks=1 00:43:56.871 --rc geninfo_unexecuted_blocks=1 00:43:56.871 00:43:56.871 ' 
00:43:56.871 13:24:04 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:56.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.871 --rc genhtml_branch_coverage=1 00:43:56.871 --rc genhtml_function_coverage=1 00:43:56.871 --rc genhtml_legend=1 00:43:56.871 --rc geninfo_all_blocks=1 00:43:56.871 --rc geninfo_unexecuted_blocks=1 00:43:56.871 00:43:56.871 ' 00:43:56.871 13:24:04 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:56.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.871 --rc genhtml_branch_coverage=1 00:43:56.871 --rc genhtml_function_coverage=1 00:43:56.871 --rc genhtml_legend=1 00:43:56.871 --rc geninfo_all_blocks=1 00:43:56.871 --rc geninfo_unexecuted_blocks=1 00:43:56.871 00:43:56.871 ' 00:43:56.871 13:24:04 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:56.871 13:24:04 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:56.871 13:24:04 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:56.871 13:24:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.871 13:24:04 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.871 13:24:04 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.871 13:24:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:56.871 13:24:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:56.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:56.871 13:24:04 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:56.872 13:24:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:56.872 13:24:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:56.872 13:24:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:56.872 13:24:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:56.872 13:24:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:56.872 13:24:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:56.872 /tmp/:spdk-test:key0 00:43:56.872 13:24:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:56.872 13:24:04 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:56.872 13:24:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:56.872 /tmp/:spdk-test:key1 00:43:56.872 13:24:04 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:56.872 
13:24:04 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1327347 00:43:56.872 13:24:04 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1327347 00:43:56.872 13:24:04 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1327347 ']' 00:43:56.872 13:24:04 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:56.872 13:24:04 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:56.872 13:24:04 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:56.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:56.872 13:24:04 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:56.872 13:24:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:56.872 [2024-12-15 13:24:04.750727] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:43:56.872 [2024-12-15 13:24:04.750775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327347 ] 00:43:57.131 [2024-12-15 13:24:04.824665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:57.131 [2024-12-15 13:24:04.847591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:57.389 13:24:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:57.389 [2024-12-15 13:24:05.048278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:57.389 null0 00:43:57.389 [2024-12-15 13:24:05.080333] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:57.389 [2024-12-15 13:24:05.080620] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.389 13:24:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:57.389 632365749 00:43:57.389 13:24:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:57.389 114096801 00:43:57.389 13:24:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1327358 00:43:57.389 13:24:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1327358 /var/tmp/bperf.sock 00:43:57.389 13:24:05 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1327358 ']' 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:57.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:57.389 13:24:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:57.390 [2024-12-15 13:24:05.151126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:43:57.390 [2024-12-15 13:24:05.151168] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327358 ] 00:43:57.390 [2024-12-15 13:24:05.225669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:57.390 [2024-12-15 13:24:05.248132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:57.648 13:24:05 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:57.648 13:24:05 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:57.648 13:24:05 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:57.648 13:24:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:57.648 13:24:05 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:57.648 13:24:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:57.906 13:24:05 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:57.906 13:24:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:58.165 [2024-12-15 13:24:05.932049] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:58.165 nvme0n1 00:43:58.165 13:24:06 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:43:58.165 13:24:06 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:58.165 13:24:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:58.165 13:24:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:58.165 13:24:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:58.165 13:24:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:58.423 13:24:06 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:58.423 13:24:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:58.423 13:24:06 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:58.423 13:24:06 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:58.423 13:24:06 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:58.423 13:24:06 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:58.423 13:24:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:58.683 13:24:06 keyring_linux -- keyring/linux.sh@25 -- # sn=632365749 00:43:58.683 13:24:06 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:58.683 13:24:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:58.683 13:24:06 keyring_linux -- keyring/linux.sh@26 -- # [[ 632365749 == \6\3\2\3\6\5\7\4\9 ]] 00:43:58.683 13:24:06 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 632365749 00:43:58.683 13:24:06 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:58.683 13:24:06 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:58.683 Running I/O for 1 seconds... 00:44:00.061 21524.00 IOPS, 84.08 MiB/s 00:44:00.061 Latency(us) 00:44:00.061 [2024-12-15T12:24:07.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:00.061 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:00.061 nvme0n1 : 1.01 21525.58 84.08 0.00 0.00 5927.10 4868.39 10360.93 00:44:00.061 [2024-12-15T12:24:07.968Z] =================================================================================================================== 00:44:00.061 [2024-12-15T12:24:07.968Z] Total : 21525.58 84.08 0.00 0.00 5927.10 4868.39 10360.93 00:44:00.061 { 00:44:00.061 "results": [ 00:44:00.061 { 00:44:00.061 "job": "nvme0n1", 00:44:00.061 "core_mask": "0x2", 00:44:00.061 "workload": "randread", 00:44:00.061 "status": "finished", 00:44:00.061 "queue_depth": 128, 00:44:00.061 "io_size": 4096, 00:44:00.061 "runtime": 1.005873, 00:44:00.061 "iops": 21525.580267091373, 00:44:00.061 "mibps": 84.08429791832567, 00:44:00.061 "io_failed": 0, 00:44:00.061 "io_timeout": 0, 00:44:00.061 "avg_latency_us": 5927.097534858762, 00:44:00.061 "min_latency_us": 4868.388571428572, 00:44:00.061 "max_latency_us": 10360.929523809524 00:44:00.061 } 00:44:00.061 ], 00:44:00.061 "core_count": 1 00:44:00.061 } 00:44:00.061 13:24:07 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:00.061 13:24:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:00.061 13:24:07 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:00.061 13:24:07 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:00.061 13:24:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:00.061 13:24:07 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:00.061 13:24:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:00.061 13:24:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:00.320 13:24:07 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:00.320 13:24:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:00.320 13:24:07 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:00.320 13:24:07 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:00.320 13:24:07 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:00.320 13:24:07 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:00.320 13:24:07 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:00.320 13:24:07 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:00.320 13:24:07 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:00.320 13:24:07 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:00.320 13:24:07 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:00.320 13:24:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:00.320 [2024-12-15 13:24:08.169170] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:00.320 [2024-12-15 13:24:08.169831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f63d0 (107): Transport endpoint is not connected 00:44:00.320 [2024-12-15 13:24:08.170820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f63d0 (9): Bad file descriptor 00:44:00.320 [2024-12-15 13:24:08.171822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:00.320 [2024-12-15 13:24:08.171837] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:00.320 [2024-12-15 13:24:08.171850] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:00.320 [2024-12-15 13:24:08.171859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:00.320 request: 00:44:00.320 { 00:44:00.320 "name": "nvme0", 00:44:00.320 "trtype": "tcp", 00:44:00.320 "traddr": "127.0.0.1", 00:44:00.320 "adrfam": "ipv4", 00:44:00.320 "trsvcid": "4420", 00:44:00.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:00.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:00.320 "prchk_reftag": false, 00:44:00.320 "prchk_guard": false, 00:44:00.320 "hdgst": false, 00:44:00.320 "ddgst": false, 00:44:00.320 "psk": ":spdk-test:key1", 00:44:00.320 "allow_unrecognized_csi": false, 00:44:00.320 "method": "bdev_nvme_attach_controller", 00:44:00.320 "req_id": 1 00:44:00.320 } 00:44:00.320 Got JSON-RPC error response 00:44:00.320 response: 00:44:00.320 { 00:44:00.320 "code": -5, 00:44:00.320 "message": "Input/output error" 00:44:00.320 } 00:44:00.320 13:24:08 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:00.320 13:24:08 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:00.321 13:24:08 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:00.321 13:24:08 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@33 -- # sn=632365749 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 632365749 00:44:00.321 1 links removed 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:00.321 
13:24:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@33 -- # sn=114096801 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 114096801 00:44:00.321 1 links removed 00:44:00.321 13:24:08 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1327358 00:44:00.321 13:24:08 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1327358 ']' 00:44:00.321 13:24:08 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1327358 00:44:00.321 13:24:08 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:00.321 13:24:08 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:00.321 13:24:08 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327358 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327358' 00:44:00.580 killing process with pid 1327358 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@973 -- # kill 1327358 00:44:00.580 Received shutdown signal, test time was about 1.000000 seconds 00:44:00.580 00:44:00.580 Latency(us) 00:44:00.580 [2024-12-15T12:24:08.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:00.580 [2024-12-15T12:24:08.487Z] =================================================================================================================== 00:44:00.580 [2024-12-15T12:24:08.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@978 -- # wait 1327358 
00:44:00.580 13:24:08 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1327347 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1327347 ']' 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1327347 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327347 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327347' 00:44:00.580 killing process with pid 1327347 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@973 -- # kill 1327347 00:44:00.580 13:24:08 keyring_linux -- common/autotest_common.sh@978 -- # wait 1327347 00:44:01.148 00:44:01.148 real 0m4.335s 00:44:01.148 user 0m8.290s 00:44:01.148 sys 0m1.445s 00:44:01.148 13:24:08 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:01.148 13:24:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:01.148 ************************************ 00:44:01.148 END TEST keyring_linux 00:44:01.148 ************************************ 00:44:01.148 13:24:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:01.148 13:24:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:01.148 13:24:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:01.148 13:24:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:01.148 13:24:08 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:01.148 13:24:08 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:01.148 13:24:08 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:01.148 13:24:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:01.148 13:24:08 -- common/autotest_common.sh@10 -- # set +x 00:44:01.148 13:24:08 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:01.148 13:24:08 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:01.148 13:24:08 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:01.148 13:24:08 -- common/autotest_common.sh@10 -- # set +x 00:44:06.554 INFO: APP EXITING 00:44:06.554 INFO: killing all VMs 00:44:06.554 INFO: killing vhost app 00:44:06.554 INFO: EXIT DONE 00:44:09.842 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:09.842 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:09.842 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:09.842 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:12.377 Cleaning 00:44:12.377 Removing: /var/run/dpdk/spdk0/config 00:44:12.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:12.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:12.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:12.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:12.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:12.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:12.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:12.377 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:12.377 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:12.377 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:12.377 Removing: /var/run/dpdk/spdk1/config 00:44:12.377 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:12.377 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:12.377 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:12.377 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:12.377 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:12.377 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:12.377 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:12.377 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:12.636 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:12.636 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:12.636 Removing: /var/run/dpdk/spdk2/config 00:44:12.636 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:12.636 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:12.636 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:12.636 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:12.636 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:12.636 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:12.636 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:12.636 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:12.636 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:12.636 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:12.636 Removing: /var/run/dpdk/spdk3/config 00:44:12.636 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:12.636 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:12.636 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:12.636 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:12.636 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:12.636 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:12.636 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:12.636 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:12.636 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:12.636 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:12.636 Removing: /var/run/dpdk/spdk4/config 00:44:12.636 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:12.636 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:12.636 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:12.636 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:12.636 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:12.636 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:12.636 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:12.636 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:12.636 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:12.636 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:44:12.636 Removing: /dev/shm/bdev_svc_trace.1 00:44:12.636 Removing: /dev/shm/nvmf_trace.0 00:44:12.636 Removing: /dev/shm/spdk_tgt_trace.pid772118 00:44:12.636 Removing: /var/run/dpdk/spdk0 00:44:12.636 Removing: /var/run/dpdk/spdk1 00:44:12.636 Removing: /var/run/dpdk/spdk2 00:44:12.636 Removing: /var/run/dpdk/spdk3 00:44:12.636 Removing: /var/run/dpdk/spdk4 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1010019 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1014427 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1016153 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1017816 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1018003 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1018204 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1018245 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1018737 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1020522 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1021269 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1021753 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1024405 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1024797 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1025502 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1029473 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1034796 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1034798 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1034800 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1038669 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1042367 00:44:12.636 Removing: /var/run/dpdk/spdk_pid1047243 00:44:12.895 Removing: /var/run/dpdk/spdk_pid1082501 00:44:12.895 Removing: /var/run/dpdk/spdk_pid1086544 00:44:12.895 Removing: /var/run/dpdk/spdk_pid1092404 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1093480 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1094763 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1096058 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1100664 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1105054 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1109393 00:44:12.896 Removing: 
/var/run/dpdk/spdk_pid1116632 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1116634 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1121220 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1121361 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1121532 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1121944 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1122114 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1123480 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1125073 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1126629 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1128273 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1129953 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1131511 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1137267 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1137818 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1139624 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1140598 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1146345 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1149326 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1154607 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1159836 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1168289 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1175158 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1175212 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1193582 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1194045 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1195105 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1195573 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1196283 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1196754 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1197213 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1197840 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1201838 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1202063 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1208007 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1208058 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1213438 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1217585 
00:44:12.896 Removing: /var/run/dpdk/spdk_pid1226887 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1227553 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1231669 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1231968 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1235934 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1242070 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1244467 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1254215 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1262875 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1264477 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1265377 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1281213 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1284956 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1288103 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1295676 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1295681 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1300625 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1302534 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1304457 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1305615 00:44:12.896 Removing: /var/run/dpdk/spdk_pid1307605 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1308645 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1317215 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1317661 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1318107 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1320343 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1320888 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1321431 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1325185 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1325201 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1326806 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1327347 00:44:13.155 Removing: /var/run/dpdk/spdk_pid1327358 00:44:13.155 Removing: /var/run/dpdk/spdk_pid770030 00:44:13.155 Removing: /var/run/dpdk/spdk_pid771064 00:44:13.155 Removing: /var/run/dpdk/spdk_pid772118 00:44:13.155 Removing: /var/run/dpdk/spdk_pid772739 00:44:13.155 Removing: 
/var/run/dpdk/spdk_pid773661 00:44:13.155 Removing: /var/run/dpdk/spdk_pid773764 00:44:13.155 Removing: /var/run/dpdk/spdk_pid774805 00:44:13.155 Removing: /var/run/dpdk/spdk_pid774857 00:44:13.155 Removing: /var/run/dpdk/spdk_pid775205 00:44:13.155 Removing: /var/run/dpdk/spdk_pid776692 00:44:13.155 Removing: /var/run/dpdk/spdk_pid778072 00:44:13.155 Removing: /var/run/dpdk/spdk_pid778816 00:44:13.155 Removing: /var/run/dpdk/spdk_pid779069 00:44:13.155 Removing: /var/run/dpdk/spdk_pid779317 00:44:13.155 Removing: /var/run/dpdk/spdk_pid779604 00:44:13.155 Removing: /var/run/dpdk/spdk_pid779853 00:44:13.155 Removing: /var/run/dpdk/spdk_pid780095 00:44:13.155 Removing: /var/run/dpdk/spdk_pid780372 00:44:13.155 Removing: /var/run/dpdk/spdk_pid781092 00:44:13.155 Removing: /var/run/dpdk/spdk_pid784022 00:44:13.155 Removing: /var/run/dpdk/spdk_pid784271 00:44:13.155 Removing: /var/run/dpdk/spdk_pid784520 00:44:13.155 Removing: /var/run/dpdk/spdk_pid784530 00:44:13.155 Removing: /var/run/dpdk/spdk_pid785006 00:44:13.155 Removing: /var/run/dpdk/spdk_pid785012 00:44:13.155 Removing: /var/run/dpdk/spdk_pid785494 00:44:13.155 Removing: /var/run/dpdk/spdk_pid785502 00:44:13.155 Removing: /var/run/dpdk/spdk_pid785778 00:44:13.155 Removing: /var/run/dpdk/spdk_pid785840 00:44:13.155 Removing: /var/run/dpdk/spdk_pid786035 00:44:13.155 Removing: /var/run/dpdk/spdk_pid786173 00:44:13.155 Removing: /var/run/dpdk/spdk_pid786600 00:44:13.155 Removing: /var/run/dpdk/spdk_pid786841 00:44:13.155 Removing: /var/run/dpdk/spdk_pid787129 00:44:13.155 Removing: /var/run/dpdk/spdk_pid790867 00:44:13.155 Removing: /var/run/dpdk/spdk_pid795187 00:44:13.155 Removing: /var/run/dpdk/spdk_pid805169 00:44:13.156 Removing: /var/run/dpdk/spdk_pid805717 00:44:13.156 Removing: /var/run/dpdk/spdk_pid809987 00:44:13.156 Removing: /var/run/dpdk/spdk_pid810373 00:44:13.156 Removing: /var/run/dpdk/spdk_pid814569 00:44:13.156 Removing: /var/run/dpdk/spdk_pid820333 00:44:13.156 Removing: 
/var/run/dpdk/spdk_pid823576 00:44:13.156 Removing: /var/run/dpdk/spdk_pid833607 00:44:13.156 Removing: /var/run/dpdk/spdk_pid842395 00:44:13.156 Removing: /var/run/dpdk/spdk_pid844153 00:44:13.156 Removing: /var/run/dpdk/spdk_pid845059 00:44:13.156 Removing: /var/run/dpdk/spdk_pid861799 00:44:13.156 Removing: /var/run/dpdk/spdk_pid865808 00:44:13.156 Removing: /var/run/dpdk/spdk_pid946961 00:44:13.156 Removing: /var/run/dpdk/spdk_pid952595 00:44:13.156 Removing: /var/run/dpdk/spdk_pid958442 00:44:13.156 Removing: /var/run/dpdk/spdk_pid964606 00:44:13.156 Removing: /var/run/dpdk/spdk_pid964617 00:44:13.156 Removing: /var/run/dpdk/spdk_pid965499 00:44:13.156 Removing: /var/run/dpdk/spdk_pid966386 00:44:13.156 Removing: /var/run/dpdk/spdk_pid967275 00:44:13.415 Removing: /var/run/dpdk/spdk_pid967729 00:44:13.415 Removing: /var/run/dpdk/spdk_pid967822 00:44:13.415 Removing: /var/run/dpdk/spdk_pid968135 00:44:13.415 Removing: /var/run/dpdk/spdk_pid968182 00:44:13.415 Removing: /var/run/dpdk/spdk_pid968184 00:44:13.415 Removing: /var/run/dpdk/spdk_pid969080 00:44:13.415 Removing: /var/run/dpdk/spdk_pid969965 00:44:13.415 Removing: /var/run/dpdk/spdk_pid970855 00:44:13.415 Removing: /var/run/dpdk/spdk_pid971315 00:44:13.415 Removing: /var/run/dpdk/spdk_pid971320 00:44:13.415 Removing: /var/run/dpdk/spdk_pid971634 00:44:13.415 Removing: /var/run/dpdk/spdk_pid972744 00:44:13.415 Removing: /var/run/dpdk/spdk_pid973705 00:44:13.415 Removing: /var/run/dpdk/spdk_pid981761 00:44:13.415 Clean 00:44:13.415 13:24:21 -- common/autotest_common.sh@1453 -- # return 0 00:44:13.415 13:24:21 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:44:13.415 13:24:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:13.415 13:24:21 -- common/autotest_common.sh@10 -- # set +x 00:44:13.415 13:24:21 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:44:13.415 13:24:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:13.415 13:24:21 -- common/autotest_common.sh@10 -- # 
set +x 00:44:13.415 13:24:21 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:13.415 13:24:21 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:13.415 13:24:21 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:13.415 13:24:21 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:44:13.415 13:24:21 -- spdk/autotest.sh@398 -- # hostname 00:44:13.415 13:24:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:13.674 geninfo: WARNING: invalid characters removed from testname! 
00:44:35.610 13:24:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:36.987 13:24:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:38.892 13:24:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:40.796 13:24:48 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:42.699 13:24:50 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:44.603 13:24:52 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:45.979 13:24:53 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:46.238 13:24:53 -- spdk/autorun.sh@1 -- $ timing_finish 00:44:46.238 13:24:53 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:44:46.238 13:24:53 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:46.238 13:24:53 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:46.238 13:24:53 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:46.238 + [[ -n 675141 ]] 00:44:46.238 + sudo kill 675141 00:44:46.248 [Pipeline] } 00:44:46.263 [Pipeline] // stage 00:44:46.269 [Pipeline] } 00:44:46.283 [Pipeline] // timeout 00:44:46.289 [Pipeline] } 00:44:46.303 [Pipeline] // catchError 00:44:46.309 [Pipeline] } 00:44:46.323 [Pipeline] // wrap 00:44:46.329 [Pipeline] } 00:44:46.342 [Pipeline] // catchError 00:44:46.352 [Pipeline] stage 00:44:46.354 [Pipeline] { (Epilogue) 00:44:46.367 [Pipeline] catchError 00:44:46.369 [Pipeline] { 00:44:46.382 [Pipeline] echo 00:44:46.384 Cleanup processes 
00:44:46.389 [Pipeline] sh 00:44:46.676 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:46.676 1339479 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:46.689 [Pipeline] sh 00:44:46.974 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:46.974 ++ grep -v 'sudo pgrep' 00:44:46.974 ++ awk '{print $1}' 00:44:46.974 + sudo kill -9 00:44:46.974 + true 00:44:46.985 [Pipeline] sh 00:44:47.269 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:59.488 [Pipeline] sh 00:44:59.773 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:59.773 Artifacts sizes are good 00:44:59.787 [Pipeline] archiveArtifacts 00:44:59.794 Archiving artifacts 00:44:59.944 [Pipeline] sh 00:45:00.229 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:00.243 [Pipeline] cleanWs 00:45:00.253 [WS-CLEANUP] Deleting project workspace... 00:45:00.254 [WS-CLEANUP] Deferred wipeout is used... 00:45:00.260 [WS-CLEANUP] done 00:45:00.262 [Pipeline] } 00:45:00.279 [Pipeline] // catchError 00:45:00.290 [Pipeline] sh 00:45:00.572 + logger -p user.info -t JENKINS-CI 00:45:00.582 [Pipeline] } 00:45:00.595 [Pipeline] // stage 00:45:00.600 [Pipeline] } 00:45:00.614 [Pipeline] // node 00:45:00.619 [Pipeline] End of Pipeline 00:45:00.665 Finished: SUCCESS